If both files are 500M, you probably don't want to read them wholly into memory to work on them (unless you've got over a gig of RAM and/or swap handy). If you know that lines in the two files will always match up (line 25 in file one always corresponds to line 25 in file two), you can stream them in parallel:
open( ONE, "file1" ) or die "open file1: $!";
open( TWO, "file2" ) or die "open file2: $!";
open( OUT, ">merged" ) or die "open merged: $!";
while ( <ONE> ) {
    chomp;
    my $two = <TWO>;
    # Append the last column of the corresponding file2 line
    # (split with " " ignores the trailing newline on $two).
    print OUT $_, " ", ( split " ", $two )[-1], "\n";
}
close( ONE );
close( TWO );
close( OUT );
If you can't guarantee a one-to-one mapping between lines, then you probably want to look into using DB_File or the like to build a hash on disk of the second file, keyed by the first two columns (assuming that's what relates the values from the two files). Then you'd read through the first file and pull the corresponding value from the hash.
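A minimal sketch of that two-pass approach, assuming whitespace-separated columns, that the first two columns form the join key, and that the value wanted from file2 is its last column (the filenames and sample data here are hypothetical stand-ins for the real 500M files):

```perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# Hypothetical demo setup: tiny stand-ins for the real files.
open( F, ">file1" ) or die "create file1: $!"; print F "a 1 x\nb 2 y\n"; close F;
open( F, ">file2" ) or die "create file2: $!"; print F "b 2 999\na 1 111\n"; close F;

# Tie a hash to a Berkeley DB file so the index lives on disk, not in RAM.
my %lookup;
unlink "file2.db";
tie %lookup, 'DB_File', "file2.db", O_RDWR|O_CREAT, 0666, $DB_HASH
    or die "tie file2.db: $!";

# Pass one: index file2, keyed on its first two columns.
open( TWO, "file2" ) or die "open file2: $!";
while ( <TWO> ) {
    chomp;
    my @f = split " ";
    $lookup{"$f[0] $f[1]"} = $f[-1];
}
close( TWO );

# Pass two: walk file1 and pull the matching value from the disk hash.
open( ONE, "file1" ) or die "open file1: $!";
open( OUT, ">merged" ) or die "open merged: $!";
while ( <ONE> ) {
    chomp;
    my @f = split " ";
    print OUT $_, " ", $lookup{"$f[0] $f[1]"}, "\n";
}
close( ONE );
close( OUT );
untie %lookup;
```

Only one file's index sits on disk; both passes stream line by line, so memory use stays flat regardless of file size.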
In reply to Re: merging to databases... Should be easy...
by Fletch
in thread merging to databases... Should be easy...
by Anonymous Monk