in reply to how to remove similar duplicate elements in a file/array

Dave_PA,
Here is how I would do it as a general solution, balancing complexity, IO, and memory consumption (untested).
#!/usr/bin/perl
use strict;
use warnings;

use File::ReadBackwards;

my @input = qw/sun.txt mon.txt tue.txt wed.txt thu.txt fri.txt sat.txt/;

# Pass 1: walk the files newest-to-oldest, and each file bottom-to-top,
# keeping only the first occurrence of each ID we see - which is the
# *last* occurrence in the original order. The kept lines land in
# 'output.rev' in reverse order.
my %seen;
open(my $rev_out_fh, '>', 'output.rev') or die "Unable to open 'output.rev' for writing: $!";
for my $file (reverse @input) {
    my $bw = File::ReadBackwards->new($file) or die "can't read '$file': $!";
    while (defined(my $line = $bw->readline)) {
        my $id = substr($line, 8, 5);
        next if $seen{$id}++;
        print $rev_out_fh $line;
    }
}
close $rev_out_fh or die "Unable to close 'output.rev': $!";

# Pass 2: read the temp file backwards to restore the original order.
open(my $out_fh, '>', 'output.txt') or die "Unable to open 'output.txt' for writing: $!";
my $bw = File::ReadBackwards->new('output.rev') or die "can't read 'output.rev': $!";
while (defined(my $line = $bw->readline)) {
    print $out_fh $line;
}
close $out_fh;
unlink 'output.rev'; # you may care if this fails
If everything fits into memory then this is probably unnecessary.
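For the in-memory case, here is a sketch of the same "last occurrence of each ID wins, output stays in original order" logic with no temp file and no File::ReadBackwards. The sub name dedupe_last is mine, and it assumes, like the snippet above, that the ID is the 5 characters starting at offset 8 of each line:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Walk the combined lines backwards, keep the first occurrence of each
# ID seen (the last one in original order), and unshift to rebuild the
# original order as we go.
sub dedupe_last {
    my @lines = @_;
    my (%seen, @keep);
    for my $line (reverse @lines) {
        my $id = substr($line, 8, 5);
        next if $seen{$id}++;
        unshift @keep, $line;
    }
    return @keep;
}

# Usage: slurp the day files in order, then dedupe in one shot.
# my @all;
# for my $file (@input) {
#     open my $fh, '<', $file or die "can't read '$file': $!";
#     push @all, <$fh>;
# }
# print dedupe_last(@all);
```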

Cheers - L~R