
If you want the output file to contain only the first instance of each key, in the order found in the input file, you could try processing the input file line by line. The script below keeps track of the keys encountered in the %seen hash and prints a record to the output file only if its key hasn't been seen before. If there are so many unique keys that the hash starts causing memory problems, you could tie it to a disk-based DBM such as Berkeley DB or GDBM.
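By way of illustration, here is a minimal sketch of that tied-hash fallback using the DB_File module (Berkeley DB); the scratch file name is just an example I've made up, and GDBM_File could be substituted since it offers the same hash interface:

    use strict;
    use warnings;
    use Fcntl;      # for the O_RDWR and O_CREAT flags
    use DB_File;    # Berkeley DB bindings

    # Hypothetical scratch file holding the tied hash on disk.
    my $dbFile = q{spw722634.seen};
    tie my %seen, q{DB_File}, $dbFile, O_RDWR | O_CREAT, 0644, $DB_HASH
        or die qq{tie: $dbFile: $!\n};

    # %seen now behaves like an ordinary hash but lives on disk, so the
    # de-duplication loop below needs no changes at all.

    untie %seen;
    unlink $dbFile or warn qq{unlink: $dbFile: $!\n};

The only cost is slower hash lookups, so I would reach for this only if the in-memory version actually runs out of RAM.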

Given the input in your OP, this code

    use strict;
    use warnings;

    my $inFile = q{spw722634.in};
    open my $inFH, q{<}, $inFile or die qq{open: < $inFile: $!\n};

    my $outFile = q{spw722634.out};
    open my $outFH, q{>}, $outFile or die qq{open: > $outFile: $!\n};

    my %seen = ();

    while ( <$inFH> )
    {
        # Build the key from the first seven pipe-delimited fields; the
        # limit of 8 stops split doing more work than necessary.
        my $key = join q{}, ( split m{\|}, $_, 8 )[ 0 .. 6 ];

        # Print the record only on the first sighting of its key.
        print $outFH $_ unless $seen{ $key } ++;
    }

    close $inFH  or die qq{close: < $inFile: $!\n};
    close $outFH or die qq{close: > $outFile: $!\n};

produces an output file with these records

    30xx|000009925000194653|00000000000000|20081031|02510|00000005445363|01|F|0207|00|||+0005655,00|||+0000000000000,00
    30xx|4150010003502043|CARDS|20081031|MP415001|00000024265698|01|F|1804|00|||+0000000000000,00|||+0000000000000,00
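As an aside, here is a quick demonstration of how the key is extracted; the record is the first one from your OP, and this snippet is purely illustrative rather than part of the script:

    use strict;
    use warnings;

    my $line =
        q{30xx|000009925000194653|00000000000000|20081031|02510|} .
        q{00000005445363|01|F|0207|00|||+0005655,00|||+0000000000000,00};

    # With a limit of 8, split stops after the seventh delimiter and
    # leaves the rest of the record untouched in the eighth element.
    # The slice [ 0 .. 6 ] then keeps only the first seven fields.
    my @fields = ( split m{\|}, $line, 8 )[ 0 .. 6 ];
    print qq{$_\n} for @fields;

One small caveat: joining the fields with an empty string could, in theory, let two different records collapse to the same key if field boundaries shift; joining with the delimiter itself, join q{|}, ..., would rule that out.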

I hope this is the sort of solution you were aiming for and that you find it of use.

Cheers,

JohnGG