I've looked high and low for some answers to my dilemma.
I need to search through a CSV file and find duplicate records, then write those records to another file.
A typical file is 30,000+ records, about 4.3 MB of text.
I figured out how to remove the duplicates with the following, but I also have to capture the removed records so the charges can be reversed.
open(my $cdr, '<', $CDR)           or die "Can't read $CDR: $!";
open(my $sorted, '>', $SORTED_CDR) or die "Can't write $SORTED_CDR: $!";
my @records = <$cdr>;
# keep only the first occurrence of each line; %seen counts repeats
my @unique = do { my %seen; grep { !$seen{$_}++ } @records };
print $sorted @unique;
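
What I think I need is a single pass that splits the file as it reads: first occurrences go to the clean file, and repeats go to a capture file for the reversals. Something like this is where I'm headed (the file names are just placeholders, and I'm assuming a duplicate means a byte-identical line):

use strict;
use warnings;

# placeholder paths -- substitute the real ones
my $CDR        = 'cdr.csv';
my $SORTED_CDR = 'cdr_unique.csv';
my $DUPE_CDR   = 'cdr_dupes.csv';   # hypothetical capture file for reversals

open(my $in,    '<', $CDR)        or die "Can't read $CDR: $!";
open(my $keep,  '>', $SORTED_CDR) or die "Can't write $SORTED_CDR: $!";
open(my $dupes, '>', $DUPE_CDR)   or die "Can't write $DUPE_CDR: $!";

my %seen;
while (my $line = <$in>) {
    if ($seen{$line}++) {
        print $dupes $line;   # seen before: capture for charge reversal
    } else {
        print $keep $line;    # first occurrence: keep it
    }
}
close $_ for $in, $keep, $dupes;

Is that a reasonable approach, or is there a better idiom for splitting the duplicates out in one pass?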