I'm not sure what your test is supposed to illustrate or what you think memory has to do with speed in this case, but as the maintainer of DBD::CSV
I can assure you that it was not built to handle a million rows quickly. Use SQLite or a full RDBMS for data sets of that size. That said, there are some big speed improvements coming to DBD::CSV soon (a new version of its SQL engine). I'll announce it on this site when it's ready.
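If it helps, here's a rough sketch of what a one-time load of your CSV into SQLite might look like with DBI and DBD::SQLite. The file names, table name, and column layout are made up for the example; adjust them to your data:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Text::CSV_XS;

    # Connect to (or create) an SQLite database file.
    # AutoCommit is off so all inserts go in one transaction.
    my $dbh = DBI->connect("dbi:SQLite:dbname=data.db", "", "",
        { RaiseError => 1, AutoCommit => 0 });

    # Hypothetical schema -- match it to your actual columns.
    $dbh->do("CREATE TABLE IF NOT EXISTS rows (id INTEGER, name TEXT, value REAL)");

    my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
    open my $fh, "<:encoding(utf8)", "data.csv" or die "data.csv: $!";
    $csv->getline($fh);    # skip the header row

    my $sth = $dbh->prepare("INSERT INTO rows (id, name, value) VALUES (?, ?, ?)");
    while (my $row = $csv->getline($fh)) {
        $sth->execute(@$row);
    }

    $dbh->commit;          # one commit at the end keeps the bulk load fast
    $dbh->disconnect;

Wrapping the inserts in a single transaction is the main thing that keeps a million-row load fast in SQLite; once the data is in, your queries go through the real SQLite engine instead of re-parsing the CSV.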