in reply to Re: Re: Re: Re^2: Are two lines in the text file equal (!count)
in thread Are two lines in the text file equal
Update: Ignore this, my error. I should have checked the return value from tie. If the tie fails, %h is just an ordinary in-memory hash, and the extra memory used is just the overhead of loading the module.
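For anyone hitting the same trap: tie returns a false value on failure, so the mistake above can be caught with a simple check. A minimal sketch, assuming the same test.db filename and a Berkeley DB-backed DB_File:

```perl
use strict;
use warnings;
use Fcntl;      # for the O_* flags
use DB_File;

# tie returns false on failure; without this check, %h silently
# remains an ordinary in-memory hash and everything "works" slowly.
tie my %h, 'DB_File', 'test.db', O_RDWR | O_CREAT, 0644
    or die "tie to test.db failed: $!";
```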
I'm probably doing something wrong, but I just tried the following code to detect duplicates in my 80 MB file (1_000_000 lines x 80 chars), and it took close to half an hour to hash the whole file.
#! perl -slw
use strict;
use DB_File;

tie my %h, 'DB_File', 'test.db';

open IN, '<', 'test.dat' or warn $!;

print scalar localtime;
$h{ $_ } .= ' ' . $. while $_ = <IN>;
print scalar localtime;

exit;

__END__
Thu Nov 13 20:55:30 2003
Thu Nov 13 21:23:31 2003
That wasn't much of a surprise, but the fact that it consumed 190 MB of memory doing so was, as that is considerably more than building a straight hash in memory takes.
Is there some way of limiting the memory use?
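One knob that might be relevant, assuming DB_File here is backed by Berkeley DB: the cachesize field of the exported $DB_HASH info object sets the size of Berkeley DB's in-memory cache, which can be passed as the sixth argument to tie. A hedged sketch (the 10 MB figure is an arbitrary example, not a recommendation):

```perl
use strict;
use warnings;
use Fcntl;
use DB_File;

# Cap Berkeley DB's internal cache (example value: ~10 MB).
$DB_HASH->{cachesize} = 10 * 1024 * 1024;

tie my %h, 'DB_File', 'test.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "tie failed: $!";
```

Whether this actually bounds the 190 MB observed above would need measuring; the cache setting only governs Berkeley DB's own buffering, not Perl's per-process overhead.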