jdtoronto wrote:
Text::CSV_XS has been the most reliable I have found. It is
flexible and so far I haven't found a 'CSV' file I cannot
handle.
Well, I certainly have, though I don't remember the
difficulties in detail now (I think it was something like
the "csv" file having spaces after the commas -- the
trouble with csv is that there is no real standard for it,
only de facto standards).
I've heard that DBD::AnyData with trim=>1 can deal with
spaces after commas, but I haven't tried it myself.
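For what it's worth, recent versions of Text::CSV_XS also have an allow_whitespace option that is supposed to strip blanks around the separator. A sketch of how that would look -- untested against the file I had trouble with, so treat it as an assumption:

```perl
use Text::CSV_XS;

# allow_whitespace strips spaces next to the separator,
# so "foo, bar" parses the same as "foo,bar"
my $csv = Text::CSV_XS->new({ binary => 1, allow_whitespace => 1 });
$csv->parse('one, two , three') or die "parse failed";
my @fields = $csv->fields;   # ("one", "two", "three")
```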
With Text::CSV_XS, you almost certainly want to use the
"binary" option. Otherwise you'll have problems with values
that have extended characters or embedded newlines.
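A minimal read loop with binary turned on might look like this (the filename is made up; getline() is the method to use here, since it handles embedded newlines that a read-a-line-then-parse approach would split):

```perl
use Text::CSV_XS;

# binary => 1 lets values contain 8-bit characters and
# embedded newlines without blowing up the parse
my $csv = Text::CSV_XS->new({ binary => 1 })
    or die "Text::CSV_XS->new failed";

open my $fh, '<', 'data.csv' or die "data.csv: $!";
while (my $row = $csv->getline($fh)) {
    print join('|', @$row), "\n";
}
close $fh;
```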
DBD::CSV uses Text::CSV_XS internally, so if Text::CSV_XS
(with binary on) is no good, don't expect DBD::CSV to do any
better. If I remember right, there's something a little
screwy with the way DBD::CSV converts the header row into
database column names (e.g. you may have trouble if there
are spaces in your column descriptions). Either fix up the
first row of your csv file manually, or look for a way to
tell it the column names yourself, overriding the header
row (as I remember it, there *is* a way, though I don't
see it in the man page at the moment).
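If memory serves, the way to do it is through the dbh's csv_tables attribute. A hedged sketch -- the table name, file name, and column names below are all made up, and the exact attribute spelling (file vs. f_file, skip_first_row) may differ between DBD::CSV versions:

```perl
use DBI;

my $dbh = DBI->connect('dbi:CSV:f_dir=.', undef, undef,
                       { RaiseError => 1 });

# Supply our own column names instead of the header row;
# skip_first_row tells the driver the file still has a
# header line to skip over.
$dbh->{csv_tables}{people} = {
    file           => 'people.csv',
    col_names      => [qw(id first_name last_name)],
    skip_first_row => 1,
};

my $sth = $dbh->prepare('SELECT first_name FROM people');
$sth->execute;
```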
You should take a look at this:
dbi_dealing_with_csv