Not a surprise - PSV (pipe-separated) files are definitely easier to generate and parse than CSV files (and you don't seem to be dealing with the problem of escaping pipes within your data). A more interesting comparison would be to attempt a 'join' between two extremely large PSV files and compare that to MySQL. I'll bet MySQL would give you a better run for the money, and memory usage would be better too.
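For illustration, here's a minimal sketch (with made-up data) of why PSV parsing is so cheap - and where it falls over the moment a field contains a literal pipe:

    use strict;
    use warnings;

    # A naive PSV parse is a single split; no quoting rules to honor.
    my $line = "42|Widget|9.99";
    my @fields = split /\|/, $line, -1;   # -1 preserves trailing empty fields
    print "field: $_\n" for @fields;

    # The catch: an unescaped pipe inside a field silently shifts columns.
    my $bad = "43|Monkey|wrench|4.50";    # three fields or four? split can't tell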
We benchmarked Text::CSV a long time ago at my last job, as we were getting a lot of CSV files from the outside world. Its performance (it topped out at 12-14K rows/minute on a p2-400 with 512M) caused us to re-architect our software to minimize the amount of CSV data we were working with. It was quite a surprise at the time, but in retrospect, parsing CSV data can be quite problematic (especially when the damn users don't follow the rules!)
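For reference, a minimal Text::CSV sketch (not our original code, just an illustration of the rule-breaking problem): parse() returns false on malformed rows, so you can quarantine them instead of letting them corrupt the import:

    use strict;
    use warnings;
    use Text::CSV;

    my $csv = Text::CSV->new({ binary => 1 });

    while (my $line = <DATA>) {
        chomp $line;
        if ($csv->parse($line)) {
            print join(" / ", $csv->fields), "\n";
        } else {
            # Unbalanced quotes and stray control characters land here
            # rather than silently mangling the row.
            warn "Skipping bad row: " . $csv->error_input . "\n";
        }
    }

    __DATA__
    1,"Smith, John",ok
    2,"unterminated quote,broken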