in reply to Re: Efficiency issue when reading large CSV files
in thread Efficiency issue when reading large CSV files

Thank you very much Tux!
Indeed, with this new module and the "getline()" method, the job can be done in a very reasonable time! The increase in speed that I experienced was in the order of 30x rather than 100x, but that's certainly more than enough to make me happy!
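For anyone else landing on this thread: a minimal sketch of the getline() approach looks roughly like this (the file name is made up, and the { binary => 1, auto_diag => 1 } options are just the commonly recommended defaults):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $file = "big_data.csv";    # hypothetical file name

    my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
    open my $fh, "<", $file or die "Cannot open $file: $!";
    while (my $row = $csv->getline ($fh)) {
        # $row is an array reference holding the fields of one record
        # ... process @$row here ...
    }
    close $fh;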

But that raises another question (and this time, it's really a question about "perl wisdom" rather than "perl how-to"). Since both Text::CSV and Text::CSV_XS are object-oriented, and since both implement the same methods, and since one is clearly faster than the other, why wasn't the code of the slower one simply replaced by the code of the faster one? In other words, why have two different modules to do the same thing?


Re^3: Efficiency issue when reading large CSV files
by Tux (Canon) on Jun 26, 2009 at 17:27 UTC

    Because Text::CSV_XS was there - in its extended implementation - way before Text::CSV, which was a braindead pure-perl implementation. After some discussion between the author of the current implementation and me, we decided that Text::CSV would best be implemented as a wrapper around the two modules. Text::CSV_XS is, as the name already shows, an XS implementation, which needs an ANSI C compiler, which not everybody has. That is why there is a pure-perl implementation, as a fallback for those who need the functionality but do not have the possibility to compile.
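
    A small sketch of what that wrapper arrangement means in practice: a script can simply use Text::CSV and let it pick whichever backend is available. The %INC check below is just generic Perl for curiosity's sake, not part of the module's API, and it only shows whether the XS module happened to be loaded:

        use strict;
        use warnings;
        use Text::CSV;

        # The wrapper uses the XS backend when it is installed and
        # falls back to pure perl otherwise; the calling code stays the same.
        my $csv = Text::CSV->new ({ binary => 1 });

        print $INC{"Text/CSV_XS.pm"}
            ? "running on the Text::CSV_XS backend\n"
            : "running on the pure-perl backend\n";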


    Enjoy, Have FUN! H.Merijn