in reply to Re: Mysql, CSV, and flatfiles, a benchmark.... (The Code)
in thread Mysql, CSV, and flatfiles, a benchmark.
Definitely not a surprise - PSV (pipe-separated) files are easier to generate and parse than CSV files (and you don't seem to be handling the problem of escaping pipes within your data). A more interesting comparison would be to attempt a 'join' between two extremely large PSV files and compare that to MySql. I'll bet MySql would give you a better run for the money, and its memory usage would be better too.
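To make the point concrete, here's a minimal sketch (the data and field names are made up for illustration) of why PSV parsing is so cheap: it's a single split, with no quoting rules to honor. Note the caveat from above - this silently breaks the moment a field contains a literal pipe, since nothing here handles escaping.

```perl
use strict;
use warnings;

# One PSV record: no quoting, no escaping, just a delimiter.
my $line = "42|widget|9.99";

# The -1 limit keeps trailing empty fields instead of dropping them.
my @fields = split /\|/, $line, -1;

# @fields is now ("42", "widget", "9.99") - but a field containing
# a literal '|' would be wrongly split into two fields.
```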
We benchmarked Text::CSV a long time ago at my last job, as we were getting a lot of CSV files from the outside world. Its performance (it topped out at 12-14K rows/minute on a P2-400 with 512M) caused us to re-architect our software to minimize the CSV data we were working on. It was quite a surprise at the time, but in retrospect, parsing CSV data can be quite problematic (especially when the damn users don't follow the rules!)
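For contrast with the PSV case, here's a rough sketch of the kind of branching a CSV parser has to do - this is not how Text::CSV is implemented, just a hand-rolled illustration of the quoting rules (double-quoted fields may contain commas, and embedded quotes are doubled) that make CSV inherently slower and more fragile to parse:

```perl
use strict;
use warnings;

# Illustrative sketch only, not Text::CSV's internals: parse one CSV
# record, honoring "quoted, fields" and doubled "" quote escapes.
sub parse_csv_line {
    my ($line) = @_;
    my @fields;
    # \G anchors each match where the previous one ended.
    while ( $line =~ /\G(?:"((?:[^"]|"")*)"|([^,]*))(,|$)/g ) {
        my $field = defined $1 ? $1 : $2;
        $field =~ s/""/"/g if defined $1;   # un-double embedded quotes
        push @fields, $field;
        last if $3 eq '';                    # hit end of record
    }
    return @fields;
}

my @row = parse_csv_line('1,"hello, world","say ""hi"""');
# @row is ("1", "hello, world", 'say "hi"')
```

Even this toy version ignores embedded newlines inside quoted fields, which is exactly where "users who don't follow the rules" tend to hurt the most.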
Replies are listed 'Best First'.

- Re: Re: Re: Mysql, CSV, and flatfiles, a benchmark.... (The Code) by merlyn (Sage) on May 07, 2001 at 20:42 UTC
- Re (tilly) 3: Mysql, CSV, and flatfiles, a benchmark.... (The Code) by tilly (Archbishop) on May 07, 2001 at 22:36 UTC