in reply to Re: Line by line file compare w/ low system memory
in thread Line by line file compare w/ low system memory

Thanks for the replies - I'll try to be more specific. The data in the files is comma separated db data. I'm trying to find the lines in file1 that aren't in file2 and vice versa. Merlyn posted an excellent approach for doing this with a hash table in a previous thread here. I'm not having any problems with the current script's logic, just with the amount of time it takes to run and the memory limitations.

Re: Re: Re: Line by line file compare w/ low system memory
by BrowserUk (Patriarch) on Feb 18, 2003 at 00:15 UTC

    The biggest problem with the hash approach is the storage requirement for the hash keys. My suggestion would be to scan the first file and build a hash keyed on a signature of each line rather than the line itself, recording as the value a pointer (from tell) to the start of that line in the file.

    The best method for deriving the line signatures will depend very much on the nature of the data in the lines. Some possibilities (with a quick size comparison after the list) are

    1. crc32 (probably not good enough)
    2. MD5. It's currently under review for security purposes, I believe, but it could be perfect for this application: you would be hashing on a 16-byte binary key rather than on whatever your line length is. If the lines are 80 chars, you cut the memory requirement of Merlyn's solution to 20%, plus a little for the file pointer, as little as 2 or 4 bytes if you pack it.
    3. MD4. Similar to the above, but a little quicker to calculate, I think.
    4. Perl's own hashing algorithm. You can find an implementation of it in perl in the sources for Tie::SubstrHash.
    5. A custom hashing algorithm tailored to your data. If it was all numeric, there is a modulo 99 algorithm similar to that used for CC number validation that might work. I'm not sure what effect spaces and commas might have upon it though.
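
    To put rough numbers on the storage involved, here is a minimal sketch, assuming the Digest::MD5 and String::CRC32 modules from CPAN are available (the sample CSV record is made up):

      use String::CRC32;            # crc32() returns a 32-bit integer
      use Digest::MD5 qw(md5);      # md5() returns a 16-byte binary string

      my $line = "1042,SMITH,JOHN,1972-03-14,ACTIVE\n";   # made-up CSV record

      my $crc = pack 'N', crc32($line);   # packed down to 4 bytes
      my $md5 = md5($line);               # always 16 bytes

      printf "line: %d bytes, crc32: %d bytes, md5: %d bytes\n",
          length($line), length($crc), length($md5);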

    Anyway, once you have built your hash from the first file, you read the second file line by line, compute the signature, and test whether it's in the hash. If it is, you read the corresponding line back from the first file using seek and then verify that the two lines match with a normal string compare.
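
    For concreteness, a rough sketch of that two-pass approach using MD5 signatures (untested; it assumes Digest::MD5 is installed, and the file names are placeholders):

      use strict;
      use warnings;
      use Digest::MD5 qw(md5);

      # Pass 1: map each line's 16-byte MD5 signature to its byte offset in file1.
      open my $f1, '<', 'file1' or die "Can't open file1: $!";
      my %sig;
      while (1) {
          my $pos  = tell $f1;
          my $line = <$f1>;
          last unless defined $line;
          $sig{ md5($line) } = $pos;
      }

      # Pass 2: look up each line of file2 by signature, then seek back into
      # file1 and string-compare to rule out signature collisions.
      open my $f2, '<', 'file2' or die "Can't open file2: $!";
      while ( my $line = <$f2> ) {
          my $digest = md5($line);
          my $pos    = $sig{$digest};
          if ( defined $pos ) {
              seek $f1, $pos, 0;
              my $orig = <$f1>;
              if ( defined $orig and $orig eq $line ) {
                  delete $sig{$digest};    # genuine match: present in both files
                  next;
              }
          }
          print "In file2 only: $line";
      }

      # Whatever is still in %sig was never matched by any line of file2.
      for my $pos ( sort { $a <=> $b } values %sig ) {
          seek $f1, $pos, 0;
          print "In file1 only: ", scalar <$f1>;
      }

    The string compare in pass 2 is the safety net against signature collisions described above.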

    The key /sic/ will be in picking the right hashing algorithm. To help with that we'd need to see a sample or realistic mock-up of your data.


    Examine what is said, not who speaks.

    The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.

Re: Re: Re: Line by line file compare w/ low system memory
by steves (Curate) on Feb 18, 2003 at 00:11 UTC

    This is a case where you may want to use existing commands that do this sort of thing, at least to prime your Perl code. Two that come to mind on UNIX:

    1. The comm command does what you want if the files are sorted.
    2. You could prime the process with this:
      sort file1 file2 | uniq -u
      The output of that is just the lines that appear in one file but not the other. That means you only have to re-read one of the files if you do this sort of thing:
      local *IN;
      my %diffs;

      # Lines appearing exactly once across both files are the differences.
      open(IN, "sort file1 file2 | uniq -u |")
          or die "Failed to open pipe to sort/uniq command: $!\n";
      ++$diffs{$_} while (<IN>);
      close(IN) or die "Failed to close pipe: $!\n";

      # Re-read file1: any difference seen there is in file1 only;
      # whatever remains in %diffs afterwards is in file2 only.
      open(IN, "<file1") or die "Failed to open file1: $!\n";
      print "In file1 only:\n";
      while (<IN>) {
          if ($diffs{$_}) {
              print $_;
              delete $diffs{$_};
          }
      }
      print "\nIn file2 only:\n", keys %diffs, "\n";