in reply to Re: Re: Line by line file compare w/ low system memory
in thread Line by line file compare w/ low system memory

The biggest problem with the hash approach is the storage requirement for the hash keys. My suggestion would be to scan the first file and build a hash using a signature of each line as the key instead of the line itself, recording as the value a pointer (from tell) to the start of that line in the file.
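A minimal sketch of that first pass might look like this, assuming MD5 as the signature (other choices are listed below) and a placeholder filename:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);    # raw 16-byte digest; one possible signature

    my %offset;                 # signature of line => byte offset of that line

    open my $fh1, '<', 'file1.txt' or die "file1.txt: $!";
    my $pos = tell $fh1;        # offset of the line about to be read
    while ( my $line = <$fh1> ) {
        chomp $line;
        $offset{ md5($line) } = $pos;
        $pos = tell $fh1;       # start of the next line
    }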

The best method of deriving the line signatures will depend very much on the nature of the data in the lines. Some possibilities are:

  1. crc32 (probably not good enough)
  2. MD5: currently under review for security purposes, I believe, but it could be perfect for this application. You would be hashing on a 16-byte binary key instead of however long your lines are. If they are 80 chars, you cut the memory requirement of Merlyn's solution to 20%, plus a little for the file pointer, as little as 2 or 4 bytes if you pack it (see the small pack example after this list).
  3. MD4: similar to the above but, I think, a little quicker to calculate.
  4. Perl's own hashing algorithm. You can find an implementation of it, written in Perl, in the source of Tie::SubstrHash.
  5. A custom hashing algorithm tailored to your data. If the data is all numeric, there is a modulo-99 algorithm, similar to the one used for credit-card number validation, that might work. I'm not sure what effect spaces and commas would have on it, though.
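To illustrate the packed file pointer mentioned in point 2, here is a rough example; the offset value is made up, and 'N' assumes the file is under 4GB (a 64-bit Perl could use 'Q' for bigger files):

    my $offset = 123_456;              # e.g. a value returned by tell
    my $packed = pack 'N', $offset;    # 4-byte string to store as the hash value
    my $back   = unpack 'N', $packed;  # recover the offset later for seek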

Anyway, once you have built the hash from the first file, you read the second file line by line, compute each line's signature, and test whether it is in the hash. If it is, you seek to the recorded offset in the first file, read that line back, and verify that the two actually match with a normal string compare.
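Continuing the sketch above, that second pass might look roughly like this; %offset is the signature-to-offset hash built in the first pass, and the filenames are again placeholders:

    use strict;
    use warnings;
    use Digest::MD5 qw(md5);

    my %offset;   # signature => offset, populated by the first-pass loop above

    open my $fh1, '<', 'file1.txt' or die "file1.txt: $!";
    open my $fh2, '<', 'file2.txt' or die "file2.txt: $!";

    while ( my $line2 = <$fh2> ) {
        chomp $line2;
        my $sig = md5($line2);
        next unless exists $offset{$sig};        # signature never seen in file 1

        seek $fh1, $offset{$sig}, 0 or die "seek: $!";
        my $line1 = <$fh1>;
        chomp $line1;
        print "$line2\n" if $line1 eq $line2;    # real compare guards against collisions
    }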

The key (pun intended) will be picking the right hashing algorithm. To help with that, we'd need to see a sample or a realistic mock-up of your data.


Examine what is said, not who speaks.

The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.
