Okay. The reasons for the discrepancy between your figures and mine:
Your records average 52 chars each: ( 10 + (0..30) ) * 2 + 2, which averages out to ( 10 + 15 ) * 2 + 2 = 52; giving 350MB / 52 ≈ 7 million records.
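(A quick sanity check of that arithmetic -- mine, not part of the earlier measurements -- just plugging in 15, the average of (0..30):)

C:\test>perl -le"print( ( 10 + 15 ) * 2 + 2, ' chars/record; ~', int( 350 * 1024**2 / 52 / 1e6 ), ' million records' )"
52 chars/record; ~7 million records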
My records were 20 million x 17-chars:
C:\test>perl -le"printf qq[key%07d*value\n], $_ for 1 .. 20e6" >bigfile

C:\test>dir bigfile
11/04/2012  04:34       370,000,001 bigfile
The size of a hash is directly proportional to the number of key/value pairs, and far less influenced by the actual size of those keys & values:
C:\test>perl -F\* -anle"$h{$F[0]}=$F[1] }{ print `tasklist /nh /fi \"pid eq $$\"`" bigfile
perl.exe                      3140 Console                    1     4,509,624 K
NOTE: That figure is 4,509,624 K, i.e. 4.3GB.
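To put a rough number on the per-pair overhead, here is an illustrative sketch of mine using Devel::Size (not the 4.3GB measurement above; absolute figures vary with perl version and build):

use strict;
use warnings;
use Devel::Size qw( total_size );

my $n = 1_000_000;
my %h;
$h{ sprintf 'key%07d', $_ } = 'value' for 1 .. $n;

my $raw   = $n * 15;              # 10-char key + 5-char value of actual data per pair
my $total = total_size( \%h );

printf "payload: %d bytes; hash: %d bytes; overhead: ~%.0f bytes/pair\n",
    $raw, $total, ( $total - $raw ) / $n;

The per-pair overhead (hash entry, SVs, bucket slot) dwarfs the 15 bytes of actual data, which is why the pair count, rather than the payload, dominates.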
The 3.8GB/4.3GB numbers I measured are the memory acquired by the process in order to build the hash.
As hashes fill, they reach a point where the current number of buckets is inadequate to contain the number of keys; so a new hash is created with double the number of buckets, and the old hash is copied into the new before the hash can continue expanding. This doubling happens many times when building a large hash. The exact point at which the hash doubles in size is complicated, as it depends not just upon the number of keys contained, but also upon the number of collisions in individual buckets.
But for the sake of discussion, let's assume that the doubling occurs when the current bucket count -- always a power of 2 -- becomes 75% utilised. To hold 20 million key/value pairs requires a bucket count of 2**25 = 33 million; the previous doubling, from 2**24 = 16 million buckets, happened while the hash was still being filled. That means that in order to build the hash with 20 million keys, the process had to have memory allocated -- at the moment of that final doubling -- for 33 + 16 = 49 million bucket slots.
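You can watch the doubling happen. This is an illustrative sketch of mine (not the measurement above); it assumes a perl recent enough (5.26-ish or later) that Hash::Util exports num_buckets -- on older perls, scalar( %h ) reports a "used/total" bucket ratio you can inspect instead:

use strict;
use warnings;
use Hash::Util qw( num_buckets );

my %h;
my $buckets = 0;

for my $i ( 1 .. 1_000_000 ) {
    $h{ sprintf 'key%07d', $i } = 'value';

    my $now = num_buckets( %h );
    if( $now != $buckets ) {
        printf "%9d keys: buckets %9d -> %9d\n", $i, $buckets, $now;
        $buckets = $now;
    }
}

Each printed line corresponds to one reallocate-and-copy of the bucket array.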
The effects of the doubling can be clearly seen in a time trace of CPU, memory and IO activity.
By the time you measure the memory, the space allocated to the smaller hash has been returned to the runtime for reallocation -- but not to the OS. And not all of those bucket slots had key/value pairs associated with them (in either hash), so they didn't use as much space as fully allocated key/value pair structures, but the bucket slots needed to be there anyway.
The upshot is that even if the OP had a more normal 32-bit memory limit of 2GB or 3GB, he wouldn't have enough memory to build the hash, even though the final structure might just squeeze into RAM.
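(As an aside -- not something the measurements above did -- one way to avoid paying for the repeated doublings and copies is to pre-size the hash before loading it, using keys() as an lvalue. A minimal sketch, assuming the same '*'-separated bigfile format as above:)

use strict;
use warnings;

my %h;
keys( %h ) = 20_000_000;    # pre-extend the bucket array once; perl rounds up to a power of 2

open my $in, '<', 'bigfile' or die $!;
while( <$in> ) {
    chomp;
    my( $key, $value ) = split /\*/;
    $h{ $key } = $value;
}
close $in;

That avoids the intermediate copies, though it does nothing to reduce the final footprint of the fully populated hash.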
A few further comments:
In part this is due to the space required to construct the hash, as detailed above; and in part due to the memory overhead of Devel::Size itself when performing the measurement. Run your ps before, as well as after, using Devel::Size to see the memory the latter uses in order to calculate the size of the hash.
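Something along these lines shows the jump (a sketch of mine, Windows-specific because it reuses the tasklist trick from above; substitute your ps invocation on *nix):

use strict;
use warnings;
use Devel::Size qw( total_size );

my %h;
$h{ sprintf 'key%07d', $_ } = 'value' for 1 .. 1_000_000;

print "before:\n", `tasklist /nh /fi "pid eq $$"`;

my $size = total_size( \%h );

print "after:\n", `tasklist /nh /fi "pid eq $$"`;
print "total_size reported: $size bytes\n";

The difference between the two tasklist figures is the working memory Devel::Size needed in order to walk the structure.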
The method you used to construct your data file -- stat()ing the file after every write -- is sooo sloooow!
A faster alternative would be:
use strict;
use warnings;
use feature 'say';

open my $out, '>', 'bigfile' or die $!;

my $l = 0;
while( $l < 350*1024*1024 ) {
    # each part is 11..40 random alphanumeric chars
    my $part1 = join '', map { ('A'..'Z','a'..'z',0..9)[rand(62)] } (0..(rand(30)+10));
    my $part2 = join '', map { ('A'..'Z','a'..'z',0..9)[rand(62)] } (0..(rand(30)+10));
    my $output = "$part1*$part2\n";
    $l += length $output;
    print $out $output;
}
close $out;    # flush before asking the filesystem for the size

my $filesize = -s 'bigfile';
say 'File size: ', $filesize;
The first time I ran your code, I thought my disk must have died, it took so long :)
In reply to Re^8: Indexing two large text files
by BrowserUk
in thread Indexing two large text files
by never_more