Update: Ignore this, my error. I should have checked the return code from tie. If the tie fails, it just creates an ordinary in-memory hash, and the extra memory used is just the overhead of loading the module.
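For anyone following along, a minimal sketch of the check that was missing (the O_CREAT/O_RDWR flags and file mode here are assumed defaults, not necessarily what the original run used):

use strict;
use warnings;
use DB_File;
use Fcntl qw( O_CREAT O_RDWR );

# Die loudly if the tie fails, rather than silently carrying on
# with an ordinary in-memory hash.
tie my %h, 'DB_File', 'test.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "tie to test.db failed: $!";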
I'm probably doing something wrong, but I just tried the following code to detect duplicates in my 80 MB file (1_000_000 lines x 80 chars) and it took close to 1/2 an hour to hash the whole file.
#! perl -slw
use strict;
use DB_File;
# NB: the return value of tie is not checked; if the tie fails, this
# silently carries on with an ordinary in-memory hash (see update above).
tie my %h, 'DB_File', 'test.db';
open IN, '<', 'test.dat' or warn $!;
print scalar localtime;
# Append each line number to the entry for that line; duplicate lines
# end up with more than one line number in their value.
$h{ $_ } .= ' ' . $. while $_ = <IN>;
print scalar localtime;
exit;
__END__
Thu Nov 13 20:55:30 2003
Thu Nov 13 21:23:31 2003
That wasn't much of a surprise, but the fact that it consumed 190 MB of memory doing so was, as that is considerably more than building the equivalent hash entirely in memory would use.
Is there some way of limiting the memory use?
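One thing I might try (untested, and the cache figure below is an arbitrary guess for illustration): DB_File lets you pass the exported $DB_HASH info object to tie, and its cachesize field is mapped onto the underlying Berkeley DB cache size, so something along these lines might cap the memory it grabs:

use DB_File;
use Fcntl qw( O_CREAT O_RDWR );

# Ask Berkeley DB for a 10 MB cache instead of its default
# (the number is a guess, not a recommendation).
$DB_HASH->{cachesize} = 10 * 1024 * 1024;

tie my %h, 'DB_File', 'test.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "tie failed: $!";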
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail