in reply to Re^2: CSV Diff Utility
in thread CSV Diff Utility

I'll admit, I skimmed your requirements, and focused on the problem of diffing data files rather than specifics.

That being said, you're combining two different problems: diffing the data itself, and putting a user-friendly interface around it.

The first is a complex problem, and if you're trying to solve it in a simple fashion, you should probably use some kind of relational database. Others have already addressed that in previous postings.
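
To make that concrete, here is a minimal sketch of the relational approach - DBD::SQLite, the table layout, and the single never-changing key column are all assumptions on my part, and loading the CSV into the tables is left out:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=diff.db', '', '',
                           { RaiseError => 1 });
    $dbh->do('CREATE TABLE old_snap (id TEXT PRIMARY KEY, rec TEXT)');
    $dbh->do('CREATE TABLE new_snap (id TEXT PRIMARY KEY, rec TEXT)');

    # ... load each CSV file into its table here ...

    # Adds: keys present in new_snap but not old_snap (deletes are the reverse)
    my $adds = $dbh->selectall_arrayref(
        'SELECT n.id FROM new_snap n LEFT JOIN old_snap o ON n.id = o.id
         WHERE o.id IS NULL'
    );

    # Modifies: same key, different record body
    my $mods = $dbh->selectall_arrayref(
        'SELECT n.id FROM new_snap n JOIN old_snap o ON n.id = o.id
         WHERE n.rec <> o.rec'
    );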

You never mentioned the size of the data or your performance requirements. Comparing data files against each other becomes very expensive as the files grow large. You mentioned that you only want to process one line at a time for each file. How do you know you're on the right line? How do you know the file is sorted properly?

The solution we used worked for us, because we needed to know if *ANYTHING* changed. That, and it had to be lightweight and extremely fast. User-friendliness wasn't an issue, because all it had to do was generate a file of SQL statements.

The issue of user-friendliness is a totally separate problem. It sounds to me like your web-based UI should be a very simple application that presents sort/diff criteria to the user and passes those criteria on to your data-processing package. In your place, I'd use CGI::Application to build that lightweight front-end.
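
To illustrate, a bare-bones CGI::Application front-end might look something like this - the run modes and parameter names are just placeholders, and the real version would hand off to your data-processing package:

    package CSVDiffUI;
    use strict;
    use warnings;
    use base 'CGI::Application';

    sub setup {
        my $self = shift;
        $self->start_mode('form');
        $self->run_modes(
            form => 'show_form',
            diff => 'run_diff',
        );
    }

    sub show_form {
        # present the sort/diff criteria to the user
        return <<'HTML';
    <form method="post">
      <input type="hidden" name="rm" value="diff">
      Key field: <input name="key_field">
      <input type="submit" value="Diff">
    </form>
    HTML
    }

    sub run_diff {
        my $self = shift;
        my $key = $self->query->param('key_field');
        # hand the criteria off to the data-processing package here
        return "Diffing on key field: $key";
    }

    CSVDiffUI->new->run;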

While your other comments aren't relevant anymore, I feel I should answer them at least in part. sed was not being used to parse the CSV, only to re-format it into something that made life easier for sort.

Newlines were never an issue, because newlines aren't legal in CSV. When newlines appeared, we complained to the vendor to get them fixed. If they didn't fix it, we made a note of the processing error and chucked the row.

sort takes delimiter arguments - that is how we were able to sort on an arbitrary field.

Whatever you end up doing, good luck - it's definitely a challenging problem.

Re^4: CSV Diff Utility
by Limbic~Region (Chancellor) on Jun 23, 2004 at 18:53 UTC
    swngnmonk,

    You never mentioned the size of the data or your performance requirements.
    The file size is unknown - that's why I didn't mention it in the assumptions. There are no performance requirements, which is why I would prefer functionality over speed.

    Comparing data files against each other becomes very expensive as the files grow large. You mentioned that you *only want to process one line at a time* for each file. (emphasis mine)
    No, I didn't. I proposed that as the logic I was thinking of in my rough outline.

    How do you know you're on the right line?
    The logic explains that. The two files are sorted prior to comparison. Depending on how the key fields from each file compare, I can tell whether a record is an add or a delete, or whether I have to examine it further to see if it is a modify.
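
    A rough sketch of the comparison loop I have in mind - the file names are hypothetical, and get_key() is a deliberately naive placeholder that would need proper CSV parsing in a real version:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Both files are assumed to be pre-sorted ASCIIbetically on the key.
        sub get_key { (split /,/, $_[0])[0] }

        open my $old, '<', 'old.csv' or die $!;
        open my $new, '<', 'new.csv' or die $!;

        my $o = <$old>;
        my $n = <$new>;
        while (defined $o or defined $n) {
            if (!defined $n or (defined $o and get_key($o) lt get_key($n))) {
                print "DELETE: $o";      # key only in the old file
                $o = <$old>;
            }
            elsif (!defined $o or get_key($o) gt get_key($n)) {
                print "ADD: $n";         # key only in the new file
                $n = <$new>;
            }
            else {                       # same key - check for a modify
                print "MODIFY: $n" if $o ne $n;
                $o = <$old>;
                $n = <$new>;
            }
        }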

    How do you know the file is sorted properly?
    The second assumption - there will be a key field in each record that will never change. The first step in my proposed solution is to sort both files by that key field ASCIIbetically.

    The issue of user-friendliness is a totally separate problem.
    It is not separate when the solution makes putting a user interface around it prohibitive. As I said, I believe awk is up to the challenge of pre-sorting the records prior to processing, but since each CSV source will be different, each awk solution will likely be different. If I can't find a way to easily automate that, then I can't make it user-friendly.

    Newlines were never an issue, because newlines aren't legal in CSV
    Do you have an RFC or ISO standard you are quoting from? The non-authoritative sources I have found indicate that newlines are indeed allowed. It seems most CSV parsers out there handle newlines because several of the "big name" players use them.
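
    For what it's worth, a CSV-aware parser copes with embedded newlines if you read record-by-record instead of line-by-line. A minimal sketch using Text::CSV_XS, assuming a hypothetical foo.csv with quoted multi-line fields:

        use strict;
        use warnings;
        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new({ binary => 1 });  # binary allows embedded newlines
        open my $fh, '<', 'foo.csv' or die $!;
        while (my $row = $csv->getline($fh)) {
            # getline() keeps reading until the quoted record is complete, so a
            # newline inside a quoted field stays part of that field
            print scalar @$row, " fields\n";
        }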

    sort takes delimiter arguments - that is how we were able to sort on an arbitrary field
    Yes, I know, but CSV can contain embedded delimiters (which must be quoted) as well as embedded quoted text:

        $ cat foo.txt
        Name, DOB, Gender
        "Smith, John", 111099 M
        "Smith, Sarah", 111199 F
        "Smith, April", 111299 F
        "Brown, Zach", 111399 M
        $ sort -t, -k2 foo.txt
        "Smith, April", 111299 F
        Name, DOB, Gender
        "Smith, John", 111099 M
        "Smith, Sarah", 111199 F
        "Brown, Zach", 111399 M
    You can see that sort doesn't grok this kind of problem.
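
    A CSV-aware parser gets around this. A minimal sketch using Text::CSV_XS to sort foo.txt above on the DOB field - note the field index is hard-coded, which is exactly the per-source configuration problem I mentioned:

        use strict;
        use warnings;
        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new({ binary => 1 });
        open my $fh, '<', 'foo.txt' or die $!;
        my $header = $csv->getline($fh);

        my @rows;
        while (my $row = $csv->getline($fh)) {
            push @rows, $row;
        }

        $csv->combine(@$header) and print $csv->string, "\n";
        for my $row (sort { $a->[1] cmp $b->[1] } @rows) {  # field 2 = DOB
            # combine()/string() re-quote fields with embedded commas on output
            $csv->combine(@$row) and print $csv->string, "\n";
        }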

    This is a problem that requires a great deal of thought. While I appreciate your input, I wouldn't have posted it here if it were that simple. I have a team of really smart people working on it.

    Cheers - L~R