I got this working, but it took around 22-30 minutes.
Hm. Then you coded it wrong.
I just ran the following, which reads a 37MB file, extracts one word from the middle of each of its 380,000 lines, looks it up in a hash built from a 250,000-line/5MB file, and writes the matching and non-matching lines to two different files.
No tricks and no modules. It uses ~13MB of RAM and takes under 4 seconds.
Of course it doesn't do exactly what you want to do, but you expected to have to do some of the work yourself didn't you?
#! perl -sw
use 5.010;
use strict;

my %smallBits;

open SMALL, '<', '758205.small' or die $!;
$smallBits{ (split)[ 0 ] } = $_ while <SMALL>;
close SMALL;

open GOOD, '>', '758205.good' or die $!;
open BAD,  '>', '758205.bad'  or die $!;

open BIG, '<', '758205.big' or die $!;
while( <BIG> ) {
    chomp;
    my $substr = (split)[ 5 ];
    if( exists $smallBits{ $substr } ) {
        say GOOD "$_ : $smallBits{ $substr }";
    }
    else {
        say BAD $_;
    }
}
close BIG;
close GOOD;
close BAD;

__END__
[15:41:05.05] C:\test>758205.pl

[15:41:08.85] C:\test>dir 758205.*
 Volume in drive C has no label.
 Volume Serial Number is 8C78-4B42

 Directory of C:\test

17/04/2009  15:41            16,855 758205.bad
17/04/2009  15:26        37,200,976 758205.big
17/04/2009  15:41        46,067,745 758205.good
17/04/2009  15:40               529 758205.pl
17/04/2009  15:28         5,422,408 758205.small
               5 File(s)     88,708,513 bytes
               0 Dir(s)  419,386,847,232 bytes free

[15:41:35.14] C:\test>wc -l 758205.small
 266000 758205.small

[15:43:32.02] C:\test>wc -l 758205.big
 380000 758205.big
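For anyone not fluent in Perl, here is a minimal sketch of the same hash-join technique in Python. It is not the author's code, just a translation of the idea: build an in-memory lookup keyed on the small file's first field, stream the big file once, and route each line by a keyed lookup. The field index (5, i.e. the sixth whitespace-separated field) and the file-path parameters are taken from the Perl version above.

```python
def hash_join(small_path, big_path, good_path, bad_path, key_index=5):
    """Build a lookup from small_path (keyed on its first field),
    then stream big_path once, writing lines whose key_index-th field
    matches to good_path and the rest to bad_path."""
    small = {}
    with open(small_path) as f:
        for line in f:
            # First whitespace-separated field is the join key.
            small[line.split()[0]] = line.rstrip("\n")

    with open(big_path) as big, \
         open(good_path, "w") as good, \
         open(bad_path, "w") as bad:
        for line in big:
            line = line.rstrip("\n")
            key = line.split()[key_index]  # sixth field, as in the Perl
            if key in small:
                print(f"{line} : {small[key]}", file=good)
            else:
                print(line, file=bad)
```

The design point is the same as in the Perl: only the small file lives in memory, the big file is processed a line at a time, so memory use is bounded by the small file regardless of how large the big file grows.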
In reply to Re: Managing huge data in Perl
by BrowserUk
in thread Managing huge data in Perl
by soura_jagat