in reply to CSV Diff Utility
Others have said this as well, but the UNIX utilities (sort, diff, sed) will make your life MUCH easier here.
I dealt with this problem a few years ago when I worked for a now-dead price-comparison site. We were getting CSV/TSV data dumps from online vendors daily; some of these files were 300+MB in size (e.g. 500,000 books), and we only wanted what had changed from the previous dump.
Our system was a pretty complex perl app, with a config file for each vendor that described the file's format, how to clean it up (none of them delivered 100% clean CSV), which column to sort on, etc.
The perl app didn't do any of the actual file processing itself; it was just an easy way to manage the config files and pass arguments to the various UNIX utils. It worked something like this:
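A minimal sketch of that kind of pipeline. The file names, the comma delimiter, and the choice of the first column as the sort key are all assumptions for illustration; the real system read those from per-vendor config files.

```shell
#!/bin/sh
# Hypothetical sketch of the sort/diff pipeline; the config-driven perl
# wrapper that picked delimiters and key columns is not shown.

# Tiny sample dumps standing in for yesterday's and today's vendor files:
printf 'sku1,9.99\nsku2,4.50\n'            > old.csv
printf 'sku1,9.99\nsku2,4.75\nsku3,2.00\n' > new.csv

# 1. Sort both dumps on the key column (here column 1, comma-delimited)
#    so diff compares aligned rows instead of raw file order.
sort -t, -k1,1 old.csv > old.sorted
sort -t, -k1,1 new.csv > new.sorted

# 2. diff the sorted files; lines prefixed '>' are rows that are new or
#    changed in today's dump. sed strips the prefix.
diff old.sorted new.sorted | sed -n 's/^> //p' > changed.csv

cat changed.csv
```

With the sample data above, `changed.csv` ends up holding only the changed row (`sku2,4.75`) and the new row (`sku3,2.00`), which is exactly the 3-5% delta you'd feed into the rest of the import.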
This saved our bacon. We were drowning in data (about 5GB/day, when our average server was a 400 MHz Pentium with 256MB of RAM and 10GB of storage), and only about 3-5% of the rows in any given file changed from the previous dump.
If your data is of any appreciable size, don't do the actual file processing in perl; use the UNIX utils. They'll be much faster and more memory-efficient than anything you'll write in perl.