in reply to Removing duplicate lines based on a match

I see a couple of problems. Look here:

foreach $line (@lines) {
    my $id = $line =~ m/ID No: (\d+)/;
    if ($seen{$id}++) {
        # ...

You're going to get 1 for $id every time, because the match is evaluated in scalar context instead of list context. In scalar context a successful match returns true (1); only in list context does it return the captured groups.
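A minimal sketch of the difference (the sample line and ID are made up for illustration):

```perl
use strict;
use warnings;

my $line = "Name: Alice  ID No: 1234";

# Scalar context: the match returns 1 (success), not the capture.
my $scalar_id = $line =~ m/ID No: (\d+)/;

# List context: parentheses around the target make the match
# return its captured groups.
my ($list_id) = $line =~ m/ID No: (\d+)/;

print "scalar: $scalar_id, list: $list_id\n";
```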

The other problem is the test itself: you want to say "if ( ! $seen{$id}++ ) ...". The first time an $id is seen, $seen{$id}++ evaluates to false (0) before incrementing; every time after that, it evaluates to true. Since you want the condition to be true once and false ever after, add the negation.
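Here's the idiom in isolation, with made-up IDs, collecting the decisions into an array so you can see which iteration is treated as new:

```perl
use strict;
use warnings;

my %seen;
my @status;

for my $id (qw(1234 5678 1234)) {
    # Post-increment returns the old value: 0 (false) on first
    # sight, 1 or more (true) on every repeat.
    if ( ! $seen{$id}++ ) {
        push @status, "new $id";
    }
    else {
        push @status, "dup $id";
    }
}

print "$_\n" for @status;
```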

Here's the part I copied, rewritten:

foreach $line (@lines) {
    my ($id) = $line =~ m/ID No: (\d+)/;
    if ( ! $seen{$id}++ ) {
        # ...

I have some other suggestions for you.

First and most important, check the return value of open! You try to open your files, but if any of those opens fail, you'll never know. Also, it's generally a good idea to use lexical filehandles instead of the global bareword ones you're using.

open my $in_fh, '<:utf8', 'input.txt'
    or die "Can't read 'input.txt': $!";
my @lines = <$in_fh>;

Second, you're doing everything in memory. If your input is huge, you could run out of memory. Instead of reading every line and then looping over the array, use a while loop that reads one line at a time. You'd also want to open your output files at the start and write to them during processing, instead of collecting their eventual contents in in-memory arrays. Like this:

open my $in_fh, '<:utf8', 'input.txt'
    or die "Can't read 'input.txt': $!";
open my $purge_fh, '>>:utf8', 'purge.txt'
    or die "Can't append to 'purge.txt': $!";
open my $uniq_fh, '>:utf8', 'data.txt'
    or die "Can't write 'data.txt': $!";

my %seen;
my $new_uniq = 0;

while ( my $line = <$in_fh> ) {
    my ($id) = $line =~ /ID No: (\d+)/;
    if ( ! $seen{$id}++ ) {
        print $uniq_fh $line;    # note: no comma after the filehandle
        $new_uniq++;
    }
    else {
        print $purge_fh $line;
    }
}

close $in_fh    or die "Close failed for input: $!";
close $purge_fh or die "Close failed for purge.txt: $!";
close $uniq_fh  or die "Close failed for data.txt: $!";

Finally, if you don't already, use strict and use warnings!
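As a small illustration of why: under strict, every variable must be declared with my (or our), so a counter like the $new_uniq in the loop above can't spring into existence from a typo; a misspelled name is a compile-time error instead of a silently created new variable.

```perl
use strict;
use warnings;

my $new_uniq = 0;   # must be declared before use under strict
$new_uniq++;

# $new_uniqq++;     # uncommenting this typo would be a
                    # compile-time error, not a silent new variable

print "new unique lines: $new_uniq\n";
```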