Much of my time is spent doing just this. Several comments:
Text::CSV_XS has been the most reliable module I have found. It is flexible, and so far I haven't come across a 'CSV' file it cannot handle. Text::CSV::Simple is too simple and won't write CSV format.
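For reference, the kind of minimal read/clean/write loop I mean looks like this (file names and the whitespace-trimming step are just placeholders):

    use strict;
    use warnings;
    use Text::CSV_XS;

    # binary => 1 copes with embedded newlines and quotes;
    # auto_diag reports problem rows instead of silently returning undef
    my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1, eol => "\n" });

    open my $in,  '<:encoding(utf8)', 'input.csv'  or die "input.csv: $!";
    open my $out, '>:encoding(utf8)', 'output.csv' or die "output.csv: $!";

    while (my $row = $csv->getline($in)) {
        # massage the fields however you need, e.g. trim whitespace
        s/^\s+|\s+$//g for @$row;
        $csv->print($out, $row);    # writes properly quoted CSV back out
    }

    close $in;
    close $out or die "output.csv: $!";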
When it comes to processing the data, in some cases I import the CSV file into a temp table using the LOAD functions of MySQL, then process the data in the table. This is useful in Tk-based apps where I can use Tk::DBI::Table to preview the data for users, especially where they are doing something like mapping input data fields to our database structure.
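A rough sketch of that staging-table load via DBI (the database, table, and column names are made up, and LOCAL INFILE has to be enabled on both client and server for this to work):

    use strict;
    use warnings;
    use DBI;

    # mysql_local_infile => 1 lets the client send the file to the server
    my $dbh = DBI->connect(
        'DBI:mysql:database=mydb;host=localhost;mysql_local_infile=1',
        'user', 'password',
        { RaiseError => 1, AutoCommit => 1 },
    );

    # pull the whole CSV into a temporary staging table, then clean it up with SQL
    $dbh->do(q{CREATE TEMPORARY TABLE staging
               (name VARCHAR(100), qty INT, price DECIMAL(10,2))});
    $dbh->do(q{
        LOAD DATA LOCAL INFILE '/tmp/upload.csv'
        INTO TABLE staging
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
    });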
Where I have bulk data, or something is done routinely in a known structure, I do as has been suggested earlier: I read the CSV file, process it, spit it out as a file again, then use LOAD from within the Perl script to have MySQL import it. The speed advantage is amazing! I have one job which imports around 100,000 records per day; reading the file and inserting the records with DBI took 7-8 minutes. Pre-processing takes about 35 seconds and the MySQL load averages 170ms!
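That bulk job boils down to something like this (paths, the table name, and the row-validity check are illustrative; the point is that the heavy lifting is one LOAD statement rather than 100,000 individual INSERTs):

    use strict;
    use warnings;
    use DBI;
    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1, eol => "\n" });

    # pass 1: read the raw feed, fix it up, write a clean file MySQL can swallow
    open my $raw,   '<', 'daily_feed.csv' or die $!;
    open my $clean, '>', '/tmp/clean.csv' or die $!;
    while (my $row = $csv->getline($raw)) {
        next unless @$row == 5;        # drop malformed rows, for example
        $csv->print($clean, $row);
    }
    close $raw;
    close $clean or die $!;

    # pass 2: one LOAD statement does the actual import
    my $dbh = DBI->connect('DBI:mysql:database=mydb;mysql_local_infile=1',
                           'user', 'password', { RaiseError => 1 });
    $dbh->do(q{
        LOAD DATA LOCAL INFILE '/tmp/clean.csv'
        INTO TABLE daily_records
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
    });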