Re: Line by line file compare w/ low system memory
by CukiMnstr (Deacon) on Feb 17, 2003 at 23:15 UTC
|
As mentioned above, the diff command might do what you need. You could run it from your perl script and capture its output; it has many options that control the output format.
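Capturing diff's output might look something like this (a minimal sketch assuming a Unix-ish system with diff on the PATH; the filenames and sample data are made up for illustration):

```perl
use strict;
use warnings;

# Hypothetical sample files so the sketch runs as-is; use your own names.
my ($file1, $file2) = ('old.txt', 'new.txt');
open my $fh, '>', $file1 or die $!; print $fh "a\nb\n"; close $fh;
open $fh, '>', $file2 or die $!; print $fh "a\nc\n"; close $fh;

# The list form of open avoids shell quoting problems; diff streams its
# output a line at a time, so memory use stays flat.
open my $diff, '-|', 'diff', $file1, $file2 or die "cannot run diff: $!";
print while <$diff>;
close $diff;

# diff exits 0 when the files are identical, 1 when they differ,
# and greater than 1 on trouble.
my $status = $? >> 8;
print "files differ\n" if $status == 1;
```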
You could also try Tie::File by fellow monk Dominus, a module that lets you treat a file like a regular perl array without loading the whole file into memory.
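The Tie::File idea could be sketched like this (a minimal demo, not a drop-in solution; the filenames and sample data are invented, and the files are tied read-only so nothing gets modified):

```perl
use strict;
use warnings;
use Tie::File;
use Fcntl 'O_RDONLY';

# Hypothetical sample files so the sketch runs as-is; use your own names.
my ($file1, $file2) = ('file1.txt', 'file2.txt');
open my $fh, '>', $file1 or die $!; print $fh "a\nb\nc\n"; close $fh;
open $fh, '>', $file2 or die $!; print $fh "a\nX\nc\n"; close $fh;

# Each tied array fetches lines on demand instead of slurping the file.
tie my @lines1, 'Tie::File', $file1, mode => O_RDONLY or die "tie: $!";
tie my @lines2, 'Tie::File', $file2, mode => O_RDONLY or die "tie: $!";

if (@lines1 != @lines2) {
    print "They differ in length\n";
}
else {
    for my $i (0 .. $#lines1) {
        next if $lines1[$i] eq $lines2[$i];
        print "They differ at line ", $i + 1, "\n";
        last;
    }
}

untie @lines1;
untie @lines2;
```

Note that indexing into a tied file is much slower than a plain read loop, so this trades speed for memory.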
hope this helps,
| [reply] |
Re: Line by line file compare w/ low system memory
by Abigail-II (Bishop) on Feb 17, 2003 at 23:40 UTC
|
I'd use something like the following untested code:
open my $f => shift or die "open: $!";
open my $s => shift or die "open: $!";
while (<$f>) {
    if (eof $s || <$s> ne $_) {print "They differ\n"; exit}
}
unless (eof $s) {print "They differ\n"; exit}
But since you don't specify what you mean by "comparing 2 files",
I most likely just wasted my time answering.
Abigail
| [reply] [d/l] |
Re: Line by line file compare w/ low system memory
by BrowserUk (Patriarch) on Feb 17, 2003 at 23:06 UTC
|
Hi Zoot. If you would read my reply to your earlier node here, and give us a little more detail of the type of comparison you need, you'll probably get a better response.
I suggest that you give your answers to the questions in that node as a reply to either this one or your own top-level node of this thread for the best response.
Examine what is said, not who speaks.
The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.
| [reply] |
|
Thanks for the replies - I'll try to be more specific. The data in the files is comma-separated db data. I'm trying to find the lines in file1 that aren't in file2 and vice versa. Merlyn posted an excellent approach for doing this with a hash table in a previous thread
here. I'm not having any problems with the current script's logic, just with the amount of time it takes to run and the memory limitations.
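Merlyn's original code isn't quoted in this thread, but the hash-table approach being referred to is, in outline, something like the following sketch (filenames and sample data are invented; this is the memory-heavy baseline the replies below try to improve on):

```perl
use strict;
use warnings;

# Hypothetical sample files so the sketch runs as-is; use your own names.
my ($file1, $file2) = ('file1.csv', 'file2.csv');
open my $fh, '>', $file1 or die $!; print $fh "1,a\n2,b\n3,c\n"; close $fh;
open $fh, '>', $file2 or die $!; print $fh "1,a\n3,c\n4,d\n"; close $fh;

# Every line of file2 becomes a hash key -- this is where the memory goes.
my %in_file2;
open my $fh2, '<', $file2 or die "open $file2: $!";
$in_file2{$_} = 1 while <$fh2>;
close $fh2;

# Report lines of file1 that never appear in file2.
# (Swap the roles of the files to get the reverse direction.)
open my $fh1, '<', $file1 or die "open $file1: $!";
while (my $line = <$fh1>) {
    print "only in $file1: $line" unless $in_file2{$line};
}
close $fh1;
```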
| [reply] |
|
The biggest problem with the hash approach is the storage requirement for the hash keys. My suggestion would be to scan the first file and build a hash using a signature of the line as the key instead of the line itself, recording a pointer into the file (from tell) to the start of the line.
The best method to derive the line signatures will depend very much on the nature of the data in the lines. Some possibles are
- crc32 (probably not good enough)
- MD5, currently under review for security purposes I believe, but it could be perfect for this application. You would be hashing on a 16-byte binary key instead of whatever your line length is. If the lines are 80 chars, you cut the memory requirement of Merlyn's solution to 20%, plus a little for the file pointer (as little as 2 or 4 bytes if you pack it).
- MD4, similar to the above but a little quicker to calculate, I think.
- Perl's own hashing algorithm. You can find an implementation of it in perl in the sources for Tie::SubstrHash.
- A custom hashing algorithm tailored to your data. If it was all numeric, there is a modulo 99 algorithm similar to that used for CC number validation that might work. I'm not sure what effect spaces and commas might have upon it though.
Anyway, once you have built your hash from the first file, you read the second file line by line, compute the signature, and test whether it's in the hash. If it is, you seek back to the recorded position in the first file, read that line, and verify that the two match with a normal string compare.
The key (sic) will be in picking the right hashing algorithm. To help with that, we'd need to see a sample or a realistic mock-up of your data.
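The MD5 variant of the scheme above could be sketched as follows (filenames and sample data are invented; Digest::MD5 is assumed to be available, and the plain string compare after the seek guards against signature collisions):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);    # md5() returns the 16-byte binary digest

# Hypothetical sample files so the sketch runs as-is; use your own names.
my ($file1, $file2) = ('file1.csv', 'file2.csv');
open my $fh, '>', $file1 or die $!;
print $fh "1,apple\n2,pear\n3,plum\n";
close $fh;
open $fh, '>', $file2 or die $!;
print $fh "1,apple\n2,grape\n3,plum\n";
close $fh;

# Pass 1: map each line's 16-byte signature to its byte offset in file1.
my %offset_for;
open my $fh1, '<', $file1 or die "open $file1: $!";
while (1) {
    my $pos  = tell $fh1;            # start-of-line offset, taken before reading
    my $line = <$fh1>;
    last unless defined $line;
    $offset_for{ md5($line) } = $pos;
}

# Pass 2: look up each line of file2 by signature, then seek back and
# verify the real line with a normal string compare.
open my $fh2, '<', $file2 or die "open $file2: $!";
while (my $line = <$fh2>) {
    my $pos = $offset_for{ md5($line) };   # may be 0, so test defined-ness
    if (defined $pos) {
        seek $fh1, $pos, 0 or die "seek: $!";
        my $original = <$fh1>;
        next if defined $original && $original eq $line;  # true match
    }
    print "only in $file2: $line";
}
close $fh1;
close $fh2;
```

The hash now holds one 16-byte key plus a small offset per line, instead of the full line text.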
Examine what is said, not who speaks.
The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.
| [reply] |
Re: Line by line file compare w/ low system memory
by steves (Curate) on Feb 17, 2003 at 23:07 UTC
|
| [reply] |
Re: Line by line file compare w/ low system memory
by zoot (Initiate) on Feb 17, 2003 at 23:46 UTC
|
Thank you all for your speedy responses. I think CukiMnstr's Tie::File suggestion will do the trick - it's exactly what I was hoping for. I was only aware of Tie::DB_File. | [reply] |