dchandler has asked for the wisdom of the Perl Monks concerning the following question:

Hello,

I am a research assistant and decided to learn Perl so that I can better collect data. One recent task that has perplexed me is how to check the validity of data I "ripped" from a webpage. I just figured out how to parse data and have made thousands of CSV files. I worry, though, that the website might have an imperfect mechanism for generating replies to queries.

Here's what I want to do. I want to check whether all of my files are different. The problem is... there are about five thousand of them.

How can I write a Perl script that would open one file, read the first line of data, and then compare it to the first line of every other file? Or better: how can I do this without comparing every file against every other one?

Re: Verifying data in large number of textfiles
by Zaxo (Archbishop) on Aug 18, 2004 at 01:22 UTC

    Better would be to get an MD5 digest of each file. Very easy with Perl 5.8 and PerlIO::via::MD5:

    use PerlIO::via::MD5;

    my %digested;
    for (glob '/path/to/*.csv') {
        open my $fh, '<:via(MD5)', $_ or warn $! and next;
        my $sum = <$fh>;
        exists $digested{$sum} and unlink($_), next;
        $digested{$sum} = $_;
    }
    That takes care of deleting duplicates as you go. The file to survive among duplicates is the first one seen.

    After Compline,
    Zaxo

Re: Verifying data in large number of textfiles
by BrowserUk (Patriarch) on Aug 18, 2004 at 00:57 UTC

    You might try using a diff utility, but with 5000 files that's a lot of comparisons. However, most diffs do have options that allow you to ignore some whitespace and other "inconsequential" differences.

    You could create a Digest::MD5 digest for each file and then compare those. If the files are identical, the MD5s will be too. However, even small whitespace differences will change the outcome, which may be too sensitive for your purpose.

    I think I would try preprocessing the files to strip all the whitespace, and then produce an MD5 from the results. That still doesn't guarantee that two files with different digests aren't really the same data set with just variations in formatting (eg. 10.0 -v- 10 -v- 1e1 etc.), but it should reduce the possibilities.
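    A minimal, untested sketch of that idea, reusing the Digest::MD5 suggestion above (the glob path is a placeholder):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my %seen;    # digest => first file seen with that digest
    for my $file (glob '/path/to/*.csv') {
        open my $fh, '<', $file or warn "Can't open '$file': $!" and next;
        my $data = do { local $/; <$fh> };    # slurp the whole file
        close $fh;

        $data =~ s/\s+//g;                    # strip all whitespace first
        my $digest = md5_hex($data);

        if (exists $seen{$digest}) {
            print "$file looks like a duplicate of $seen{$digest}\n";
        }
        else {
            $seen{$digest} = $file;
        }
    }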

    Beyond that, a lot depends on the size, nature and format of the data and the reliability of your ripping process.


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
      Is an MD5 digest like a fingerprint for each file? That too might be useful...

        Yes. A bit like a checksum, but being 128 bits, it's reasonably safe to assume that if the generated numbers are the same, the data from which they were generated is too.

        Note: Reasonably safe means "not guaranteed", but for your application it's perfect, as you only need to manually compare those files generating the same signature. If they are indeed the same, then you can discard one of them.

        (Incidentally, if you ever find two substantially different files that generate the same MD5, it would be interesting to see them. :)

        The problem with this, as I mentioned, is that even inconsequential differences, like trailing whitespace, will get you different md5s. Hence the suggestion to strip the whitespace before generating the md5s.

        If the data contains numbers, you might want to "normalise" those to some consistent format (using sprintf, for example). Likewise, if there is any chance that text may sometimes be identical except for case, you could normalise it to all lower or upper case.

        In the end, you get 5000 (big) numbers. Stick them in a hash, checking for their previous existence first. Any duplicates and you have found what you're looking for.
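        A rough, untested illustration of that normalisation; the numeric format and the comma-separated field handling are just guesses at what your data needs:

        use strict;
        use warnings;

        # Normalise one CSV line: strip whitespace, case-fold the text, and
        # reformat anything numeric with sprintf so 10, 10.0 and 1e1 all
        # come out the same before digesting.
        sub normalise_line {
            my ($line) = @_;
            $line =~ s/\s+//g;
            my @fields = split /,/, lc $line;
            for my $field (@fields) {
                $field = sprintf '%.6g', $field
                    if $field =~ /^[+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?$/;
            }
            return join ',', @fields;
        }

        Digest the normalised lines rather than the raw file and the 5000-digest hash check becomes insensitive to that sort of formatting noise.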


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "Think for yourself!" - Abigail
        "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
Re: Verifying data in large number of textfiles
by gaal (Parson) on Aug 18, 2004 at 04:40 UTC
    There are great replies above, but note that hash collisions *could* occur even in legitimate data.

    If you're worried about correctness, I suggest the following for each file:

    • Normalize the data
    • Compute a hash (MD5, SHA-1, whatever)
    • If this is a new hash, insert $hashcode => [ $data ] into the seen dictionary.
    • If this hash *does* exist in the seen dictionary, make a full compare against all datums in @{ $seen{$hashcode} }. Reject the input if it matches one of the existing datums; accept it and push it into the dictionary if by (unlikely) coincidence it doesn't.
    I'd definitely recommend this more correct approach over the straightforward one if you need to do all this repeatedly. If you're just checking your imported data as a one-off, then ignoring this issue is probably fine (just keep it in mind).
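    A rough sketch of that bookkeeping, assuming $data already holds the normalized contents of one file (the names are illustrative, not prescriptive):

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    my %seen;    # $hashcode => array ref of data strings already accepted

    # Returns true if $data duplicates something already accepted.
    sub is_duplicate {
        my ($data) = @_;
        my $hashcode = md5_hex($data);

        if (exists $seen{$hashcode}) {
            # Same hash: fall back to a full comparison against each
            # previously seen datum with this hash code.
            for my $candidate (@{ $seen{$hashcode} }) {
                return 1 if $candidate eq $data;
            }
        }
        push @{ $seen{$hashcode} }, $data;    # new, or a genuine collision
        return 0;
    }

    The full comparison only ever runs on the few files that share a digest, so the cost stays close to the plain hash approach.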

    But now, I've a question: don't the result pages return some sort of ID for each query result? If so, and if you can't trust this ID to guarantee data uniqueness, you should not include it in the hash. Strip anything that's not hard data — less noise, better experiment.

Re: Verifying data in large number of textfiles
by ikegami (Patriarch) on Aug 18, 2004 at 01:37 UTC

    You could put the data in a database, adding a linenum and a filenum field if necessary. Then, all you'd have to do is:

    foreach line $linenum
        Compare the number of records returned by
            "SELECT * WHERE LINENUM=$linenum"
        to the number of records returned by
            "SELECT DISTINCT * WHERE LINENUM=$linenum".
        If they're different, there are duplicate records.
    end
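    Assuming the lines have been loaded into, say, an SQLite table lines(filenum, linenum, line) (the table and column names are invented for this sketch), the check could look like:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=rips.db', '', '',
                           { RaiseError => 1 });

    my ($max_linenum) = $dbh->selectrow_array('SELECT MAX(linenum) FROM lines');

    for my $linenum (1 .. $max_linenum) {
        my ($total) = $dbh->selectrow_array(
            'SELECT COUNT(*) FROM lines WHERE linenum = ?', undef, $linenum);
        my ($distinct) = $dbh->selectrow_array(
            'SELECT COUNT(DISTINCT line) FROM lines WHERE linenum = ?',
            undef, $linenum);
        print "line $linenum has duplicates across files\n"
            if $total != $distinct;
    }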

    The same approach can be taken without a database. It involves regrouping all the files so that line1.dat contains the first line of every original file, line2.dat contains the second line of every original file, etc. Pseudo-code:

    foreach original file
        $linenum = 1;
        while not eof
            append the line to file "line${linenum}.dat"
            $linenum++;
        end
    end
    foreach line file
        Compare the number of lines returned by 'cat line###.dat | sort | uniq'
        with the number of lines in line###.dat.
        If they're different, there are duplicate records.
    end
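    The duplicate check on each regrouped file might look like this in Perl (an untested sketch, following the file naming in the pseudo-code above):

    use strict;
    use warnings;

    for my $linefile (glob 'line*.dat') {
        open my $fh, '<', $linefile or die "Can't open '$linefile': $!";
        my @lines = <$fh>;
        close $fh;

        my %uniq;
        $uniq{$_}++ for @lines;

        print "$linefile contains duplicate records\n"
            if keys(%uniq) != @lines;
    }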

    A completely different approach is to convert your CSV files to fixed-length field files. Then you can easily compare an arbitrary line in one file to the same line in another file by using seek().
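    A small, untested sketch of that seek() trick, assuming every record has been padded to a fixed length of $RECLEN bytes (the value and file names below are only for illustration):

    use strict;
    use warnings;

    my $RECLEN = 80;    # fixed record length in bytes, newline included

    # Read record number $linenum (1-based) from a fixed-length-record file.
    sub read_record {
        my ($path, $linenum) = @_;
        open my $fh, '<', $path or die "Can't open '$path': $!";
        binmode $fh;
        seek $fh, ($linenum - 1) * $RECLEN, 0 or die "seek failed: $!";
        read $fh, my $record, $RECLEN;
        close $fh;
        return $record;
    }

    # Compare, say, line 3 of two files without reading either in full.
    print "same third line\n"
        if read_record('file_a.dat', 3) eq read_record('file_b.dat', 3);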

Re: Verifying data in large number of textfiles
by SciDude (Friar) on Aug 18, 2004 at 01:40 UTC

    Using a consistent algorithm may provide you with a consistent set of identical "rips" from your webpage. Just for a moment, let's consider that unlikely reality to be true.

    You must combine the methods for parsing over all files in a directory with your comparison and sorting options.

    The first line may not give you the best indication for comparison. I would suggest Digest::MD5 instead, and the following untested code - mostly ripped from the docs:

    use strict;
    use warnings;
    use Digest::MD5;

    my %seen = ();
    my $dirname = "/path/to/files";

    # Parse over files in directory
    opendir(my $dir, $dirname) or die "can't open $dirname: $!";

    # Take a careful look at each file in $dirname
    while (defined(my $name = readdir($dir))) {
        my $file = "$dirname/$name";
        next unless -f $file;
        open(my $fh, '<', $file) or die "Can't open '$file': $!";
        binmode($fh);

        # make a $hash of each file
        my $hash = Digest::MD5->new->addfile($fh)->hexdigest;
        close($fh);

        # store a copy of this $hash and compare it with all others seen
        unless ($seen{$hash}++) {
            # this is a unique file
            # do something with it here - perhaps move it to a /unique location
        }
    }
    closedir($dir);
    ...code is untested

    SciDude
    The first dog barks... all other dogs bark at the first dog.
Re: Verifying data in large number of textfiles
by wfsp (Abbot) on Aug 18, 2004 at 05:36 UTC
    Perhaps do some extra work while you're parsing the data. As mentioned above, check whitespace, numbers, IDs, dates.

    Is there anything in particular that worries you? Which part of the data are you particularly interested in?

    I would consider generating additional reports.

    Also, if this would only be run occasionally I wouldn't consider 5000 files a problem. I would be tempted to get in and have a good look!

    As usual, it depends, but I think you could reassure yourself more on that first pass.