in reply to CSV Diff Utility

Others have said this as well, but the UNIX utilities (sort, diff, sed) will make your life MUCH easier here.

I dealt with this problem a few years ago when I worked for a now-dead price-comparison site. We were getting CSV/TSV data dumps from online vendors daily; some of these files were 300+M in size (e.g. 500,000 books), and we only wanted what had changed from the previous dump.

Our system was a pretty complex perl app, with config files for each vendor that described what the format of the file was, how to clean it up (none of them delivered 100% clean CSV files), what column to sort on, etc.

The perl app didn't do any actual file processing itself - it was simply an easy way to handle config files and pass arguments to the various UNIX utils. It worked something like this:

• Use sed to clean up any potential issues
• Use sort to re-order the file based on the unique-key column
• diff the newly-generated file against the last one we processed
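
A minimal sketch of that kind of driver, with hypothetical file names and a hard-coded per-vendor config hash (the real app read all of this from the config files):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical per-vendor settings.
    my %cfg = (
        cleanup_sed => 's/\r$//',   # example cleanup rule only
        delimiter   => ',',
        sort_field  => 1,           # unique-key column (1-based, for sort -k)
    );

    my $new_dump   = 'vendor_new.csv';          # today's dump
    my $old_sorted = 'vendor_last_sorted.csv';  # yesterday's cleaned, sorted dump

    # Clean up the raw dump with sed, then sort it on the key column.
    system("sed -e '$cfg{cleanup_sed}' $new_dump"
         . " | sort -t'$cfg{delimiter}' -k$cfg{sort_field},$cfg{sort_field}"
         . " > vendor_new_sorted.csv") == 0
        or die "clean/sort failed: $?";

    # diff against the previous sorted dump; keep only the changed rows.
    system("diff $old_sorted vendor_new_sorted.csv > vendor.diff");
    die "diff failed: $?" if ($? >> 8) > 1;     # diff exits 1 when rows differ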

This saved our bacon. We were drowning in data (about 5G/day, when our average server was a 400MHz Pentium w/ 256M of RAM and 10G of storage), and only about 3-5% of the rows in any given file changed from the previous dump.

If your data is of any appreciable size, don't do the actual file processing in Perl; use the UNIX utils - they'll be much faster and more memory-efficient than anything you'll write in Perl.

Re^2: CSV Diff Utility
by Limbic~Region (Chancellor) on Jun 23, 2004 at 16:56 UTC
    swngnmonk,
    I have to vehemently disagree with you. The one person who suggested utils was only trying to offload the sorting and left the rest of the "actual file-processing" to Perl. graff realized one limitation - imbedded newlines. I went on to list others.

    Use sed to clean up any potential issues
    What are you talking about?
    CSV is like HTML - it should be parsed, not have a single regex applied to it. While it is possible to build a sed parsing script, it is a nightmare to maintain and does not come close to having the logic functionality of Perl.
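
    A minimal sketch of what "parse it" looks like with Text::CSV (the file name here is made up for illustration):

        use strict;
        use warnings;
        use Text::CSV;

        # binary => 1 lets the parser cope with embedded newlines and odd bytes.
        my $csv = Text::CSV->new({ binary => 1 })
            or die "Cannot use CSV parser: " . Text::CSV->error_diag;

        open my $fh, '<', 'vendor.csv' or die $!;
        while ( my $row = $csv->getline($fh) ) {
            print join(' | ', @$row), "\n";   # fields split correctly, quotes and all
        }
        $csv->eof or $csv->error_diag;        # report any parse failure
        close $fh;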

    Use sort to re-order the file based on the unique-key column
    How exactly do you propose to do that? CSV is not fixed width, and with imbedded quote characters being escaped, it doesn't sound too easy. While I think awk might be up to the challenge, it seems that each set of CSV files would require a unique solution.

    diff the newly-generated file against the last one we processed
    I guess you didn't read the requirements section of my post: being able to have the user specify columns to be ignored in the comparison, to select which columns are case sensitive, and to get which individual fields are different and not just the entire record.
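
    A rough sketch of the per-field check those requirements imply (the option names and column indexes are invented for illustration):

        use strict;
        use warnings;

        # Hypothetical per-run options supplied by the user.
        my %opts = (
            ignore   => { 3 => 1 },   # column indexes to skip in the comparison
            case_ins => { 1 => 1 },   # columns compared case-insensitively
        );

        # Return the indexes of the fields that differ between two parsed records.
        sub changed_fields {
            my ($old, $new) = @_;     # array refs of parsed fields
            my @changed;
            for my $i ( 0 .. $#$new ) {
                next if $opts{ignore}{$i};
                my ($x, $y) = ($old->[$i], $new->[$i]);
                ($x, $y) = (lc $x, lc $y) if $opts{case_ins}{$i};
                push @changed, $i if $x ne $y;
            }
            return @changed;
        }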

    Finally, I indicated that this had to be "user friendly" and never mentioned speed as a bonus, let alone a requirement. I am quite familiar with *nix utilities and don't hesitate to use them when they are the "right tool" for the job. They can't make my "life MUCH easier here" if they fail to meet the requirements.

    L~R

      I'll admit, I skimmed your requirements and focused on the problem of diffing data files rather than the specifics.

      That being said, you're combining two different problems:

      • Being able to compare two sets of data based on arbitrary criteria
      • User-friendliness

      The first is a complex problem, and if you're trying to do it in a simple fashion, you should probably use some kind of relational database to solve it. Others have already addressed that in previous postings.

      You never made mention of the size of the data or performance requirements. Comparing data files against each other becomes a very expensive process as the files get large. You mentioned that you only want to process one line at a time for each file. How do you know you're on the right line? How do you know the file is sorted properly?

      The solution we used worked for us, because we needed to know if *ANYTHING* changed. That, and it had to be lightweight and extremely fast. User-friendliness wasn't an issue, because all it had to do was generate a file of SQL statements.

      The issue of user-friendliness is a totally separate problem. It sounds to me like your web-based UI should be a very simple application that presents sort/diff criteria to the user and passes those criteria to your data-processing package. In your place, I'd use CGI::Application to create that lightweight front-end for the user.
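
      A rough sketch of that kind of thin CGI::Application front-end (the module, run-mode, and parameter names are invented for illustration):

          package CsvDiffUI;
          use strict;
          use warnings;
          use base 'CGI::Application';

          sub setup {
              my $self = shift;
              $self->start_mode('form');
              $self->run_modes(
                  form => 'show_form',   # collect key column, ignored columns, etc.
                  diff => 'run_diff',    # hand the criteria to the diff package
              );
          }

          sub show_form {
              # A bare-bones HTML form; a real app would use a template.
              return qq{<form method="post">
                  <input type="hidden" name="rm" value="diff">
                  Key column: <input name="key_col">
                  Columns to ignore: <input name="ignore_cols">
                  <input type="submit" value="Compare">
                  </form>};
          }

          sub run_diff {
              my $self = shift;
              my $q    = $self->query;
              # Pass the criteria straight to the (hypothetical) back-end module, e.g.
              # My::CsvDiff->compare( key_col => $q->param('key_col'), ... );
              return "Would compare on key column " . $q->param('key_col');
          }

          1;

      The instance script is then just: use CsvDiffUI; CsvDiffUI->new->run;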

      While your other comments aren't relevant anymore, I feel like I should answer them in part. sed was not being used to parse the CSV, only to re-format it into something that made life easier for sort.

      Newlines were never an issue, because newlines aren't legal CSV. When newlines appeared, we complained to the vendor to get them fixed. If they didn't fix it, we made a note of the processing error and chucked the row.

      sort takes delimiter arguments - that is how we were able to sort on an arbitrary field.

      Whatever you end up doing, good luck - it's definitely a challenging problem.

        swngnmonk,

        You never made mention of the size of the data or performance requirements.
        The file size is unknown - that's why I didn't mention it in the assumptions. There are no performance requirements, which is why I would prefer functionality over speed.

        Comparing data files against each other becomes a very expensive process as the files get large. You mentioned that you only want to process one line at a time for each file. (emphasis mine)
        No, I didn't. I proposed that as the logic I was thinking of in my rough outline.

        How do you know you're on the right line?
        The logic explains that. The two files are sorted prior to comparison. Depending on the key field from each file, I can tell if it is an add or a delete, or whether I have to examine further to see if it is a modify.
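
        A rough sketch of that merge-style walk over the two key-sorted files (file names and the simplistic key extraction are placeholders; the real thing would use a proper CSV parser):

            use strict;
            use warnings;

            # Hypothetical helper: return (key, whole_line) for the next record,
            # or an empty list at end of file.  The key here is just the first
            # comma-separated field.
            sub read_rec {
                my $fh = shift;
                defined( my $line = <$fh> ) or return;
                chomp $line;
                my ($key) = split /,/, $line, 2;
                return ($key, $line);
            }

            open my $old_fh, '<', 'old_sorted.csv' or die $!;
            open my $new_fh, '<', 'new_sorted.csv' or die $!;

            my ($k1, $r1) = read_rec($old_fh);
            my ($k2, $r2) = read_rec($new_fh);

            while ( defined $k1 or defined $k2 ) {
                if ( !defined $k2 or ( defined $k1 and $k1 lt $k2 ) ) {
                    print "DELETED:  $r1\n";           # key only in the old file
                    ($k1, $r1) = read_rec($old_fh);
                }
                elsif ( !defined $k1 or $k2 lt $k1 ) {
                    print "ADDED:    $r2\n";           # key only in the new file
                    ($k2, $r2) = read_rec($new_fh);
                }
                else {                                 # same key: check for a modify
                    print "MODIFIED: $r2\n" if $r1 ne $r2;
                    ($k1, $r1) = read_rec($old_fh);
                    ($k2, $r2) = read_rec($new_fh);
                }
            }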

        How do you know the file is sorted properly?
        The second assumption - There will be a key field in each record that will never change. The first step in my proposed solution is to sort by the key field ASCIIbetically.

        The issue of user-friendliness is a totally separate problem.
        It is not separate when the solution is prohibitive with regard to putting a user interface around it. As I said, I believe awk is up to the challenge of pre-sorting the records prior to processing, but since each CSV source will be different, each awk solution will likely be different. If I can't find a way to easily automate that, then I can't make it user-friendly.
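
        One hedged alternative to per-source awk scripts is to do the pre-sort in Perl itself, parsing with Text::CSV and sorting on the key column (this reads the whole file into memory, so it is only a sketch for files of modest size; names are illustrative):

            use strict;
            use warnings;
            use Text::CSV;

            my $key_col = 0;    # index of the never-changing key field (illustrative)
            my $csv = Text::CSV->new({ binary => 1 }) or die "Cannot use CSV parser";

            open my $in, '<', 'vendor.csv' or die $!;
            my @rows;
            while ( my $row = $csv->getline($in) ) {
                push @rows, $row;
            }
            close $in;

            # cmp gives the ASCIIbetical ordering on the key field.
            open my $out, '>', 'vendor_sorted.csv' or die $!;
            for my $row ( sort { $a->[$key_col] cmp $b->[$key_col] } @rows ) {
                $csv->combine(@$row) or die "combine failed";
                print {$out} $csv->string, "\n";
            }
            close $out;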

        Newlines were never an issue, because newlines aren't legal CSV
        Do you have an RFC or ISO standard you are quoting from? The non-authoritative sources I have found indicate that newlines are indeed allowed. It seems most CSV parsers out there handle newlines because several of the "big name" players use them.

        sort takes delimiter arguments - that is how we were able to sort on an arbitrary field
        Yes, I know, but CSV can contain imbedded delimiters (which must be quoted) as well as imbedded quoted text:

        $ cat foo.txt
        Name, DOB, Gender
        "Smith, John", 111099 M
        "Smith, Sarah", 111199 F
        "Smith, April", 111299 F
        "Brown, Zach", 111399 M
        $ sort -t, -k2 foo.txt
        You can see that sort doesn't grok this kind of problem.
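
        To make the failure concrete (an illustration, not part of the original example), a naive comma split hands sort the wrong "second field" entirely:

            use strict;
            use warnings;

            my @fields = split /,/, q{"Smith, John", 111099 M};
            print "[$_]\n" for @fields;
            # Prints ["Smith], [ John"], [ 111099 M] -- the second comma-delimited
            # field that sort -t, -k2 keys on is part of the name, not the DOB.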

        This is a problem that requires a great deal of thought. While I appreciate your input, I wouldn't have posted it here if it were that simple. I have a team of really smart people working on it.

        Cheers - L~R