In reply to "Should this be so slow?"

To expand a bit on Roy Johnson's suggestion, you could read the entire "cdr_details.1" file at the beginning, and build a hash-of-arrays (HoA): the hash is keyed by the "xdr" file names to be searched, and each hash element is an array of patterns to search for in that file.

Once the HoA structure is filled, loop over the hash keys (file names to open), and as you read each line from the current file, loop over the search patterns and output the current line if there's a match.

Something like this:

use strict;
use warnings;

my %search;

open( LIST, "cdr_details.1" ) or die "cdr_details.1: $!";
while (<LIST>) {
    chomp;
    my @terms   = ( split /,/, $_, -1 )[ 0, 1, 2, 6, 7 ];
    my $xdrfile = pop @terms;    # file to search is the last term
    # save the remaining terms as a null-byte-separated string;
    # multiple strings are pushed onto an array for each xdr file
    push @{ $search{$xdrfile} }, join( "\0", @terms );
}
close LIST;

open( OUT, ">cdr_tdm.csv" ) or die "cdr_tdm.csv: $!";
for my $xdrfile ( sort keys %search ) {
    my @findsets = @{ $search{$xdrfile} };
    open( XDR, $xdrfile ) or do { warn "$xdrfile: $!\n"; next };
    while (<XDR>) {
        chomp;
        # $fldset is a null-byte-separated string that could match @findsets
        my $fldset = join( "\0", ( split /,/, $_, -1 )[ 1, 2, 4, 6 ] );
        for my $findset (@findsets) {
            if ( $fldset eq $findset ) {
                print OUT "$_\n";    # re-add the newline removed by chomp
                last;
            }
        }
    }
    close XDR;
}
close OUT;
Note that this outputs matching lines grouped by xdr file name, rather than in the order of the search terms in your cdr_details file.

If you want the "cdr_tdm" list sorted some other way, just sort that file after this script is done writing it. (The unix "sort" command is good for that, though a Perl script to do the same thing would be pretty simple as well.)
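For instance, a minimal sketch with unix sort, assuming you want the output ordered by the first comma-separated field (the field number is a guess; adjust -k to whichever field is your real key):

```shell
# Sort cdr_tdm.csv on its first comma-separated field.
# -t, sets the field separator; -k1,1 sorts on field 1 only.
sort -t, -k1,1 cdr_tdm.csv > cdr_tdm.sorted.csv
```

If the key field is numeric, add `-n` to the `-k` spec (e.g. `-k1,1n`) so "10" sorts after "9" rather than before it.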