One note here: the OP did not mention the size of the CSV data file. DBD::CSV uses Text::CSV_XS under the hood, but it has to read the complete file into memory before it can do any database-like operations. With a 2 GB file, that might result in something like 20 GB of memory use (Perl's per-value overhead). When files are that big (again, I don't know how large the OP's file is), switching to basic streamed IO processing is usually a lot easier.
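The streaming pattern looks something like the sketch below: Text::CSV_XS's getline () reads one record at a time, so memory use stays flat no matter how big the file is. The file name and the per-record action are made up for the example.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $file = "data.csv";   # hypothetical file name

    my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
    open my $fh, "<", $file or die "$file: $!";

    # Process one record at a time; only a single row is ever
    # held in memory
    while (my $row = $csv->getline ($fh)) {
        print $row->[0], "\n";   # e.g. print the first column
        }
    close $fh;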
I fully agree, though, that DBD::CSV is the best stepping stone towards a real RDBMS, where those memory limits do not apply (for the end-user script).
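To illustrate that stepping stone, a minimal DBD::CSV sketch (the file name, table name, and column are invented here): the same DBI code keeps working when the dbi:CSV: DSN is later swapped for a real database driver.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # f_ext maps table "data" to the file data.csv in f_dir
    my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
        f_dir      => ".",
        f_ext      => ".csv",
        RaiseError => 1,
        });

    my $sth = $dbh->prepare ("select * from data where id > 10");
    $sth->execute;
    while (my $row = $sth->fetchrow_hashref) {
        print "$row->{id}\n";
        }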
YMMV
Update: I just did a quick test with the OP data extended to a 1 MB CSV file. Reading that into memory using getline_all () resulted in a 10 MB data structure (as reported by Devel::Size::total_size ()).
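A quick way to reproduce such a measurement is sketched below (the file name is a placeholder, and the exact numbers will differ per perl build):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;
    use Devel::Size qw( total_size );

    my $file = "data.csv";   # the extended 1 MB test file

    my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
    open my $fh, "<", $file or die "$file: $!";

    # Slurp everything into one array-of-arrays, as a
    # database-like layer would have to
    my $aoa = $csv->getline_all ($fh);
    close $fh;

    printf "%d bytes on disk => %d bytes in memory\n",
        -s $file, total_size ($aoa);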