in reply to Data parsing - takes too long

Unfortunately, you haven't given enough information to give a working solution. See this for an example of well defined requirements and assumptions. Here are some ideas though.

Instead of slurping the first file into an array, you should consider a hash. A hash keyed on the field you search for will improve your lookup time, and it lets you remove a record with delete instead of your terribly inefficient array-removal method (even splice would be an improvement over what you have). Additionally, using a hash lets you look records up by a meaningful key. If you are forced to use an array, consider the constant pragma to make the indices more meaningful.
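To illustrate the difference, here is a small sketch with made-up sample records (the keys and fields are invented for the example, not taken from your data):

```perl
use strict;
use warnings;

# Hash: lookup and removal by key are both constant-time
my %inventory = (
    'PC001' => [ 'Acme',   'jdoe'   ],
    'PC002' => [ 'Globex', 'msmith' ],
);
my $rec = delete $inventory{'PC001'};   # fetch and remove in one step

# Array: you must scan for the record, then splice it out
my @rows = (
    [ 'PC001', 'Acme',   'jdoe'   ],
    [ 'PC002', 'Globex', 'msmith' ],
);
for my $i ( 0 .. $#rows ) {
    if ( $rows[$i][0] eq 'PC001' ) {
        splice @rows, $i, 1;   # still a linear scan, but better than rebuilding the array
        last;
    }
}
```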

Cheers - L~R

Update: Here is a proof of concept

use constant COMP => 0;
use constant USER => 1;
use constant UNKN => 2;

my %inventory;
open(my $fh_inv, '<', 'Inventory.csv')
    or die "Unable to open 'Inventory.csv' for reading: $!";
while ( <$fh_inv> ) {
    chomp;
    my ($key, $company, $user, $unkn) = (split /,/)[0, 3, 1, 4];
    $inventory{$key} = [$company, $user, $unkn];
}

my @Data;
open(my $fh_clients, '<', 'clients.txt')
    or die "Unable to open 'clients.txt' for reading: $!";
while ( <$fh_clients> ) {
    chomp;
    my @field = split /,/;
    my $iref = $inventory{$field[3]};
    if ( $iref ) {   # skip clients with no matching inventory record
        push @Data, join ',', $field[3], $iref->[COMP],
            @field[6, 7, 4, 11], @{$iref}[USER, UNKN];
    }
}
# processing @Data after this
You should also consider using Text::CSV_XS, which parses CSV correctly even when fields contain embedded commas or quotes - something a naive split /,/ will silently get wrong.
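For instance, a minimal sketch (the sample line is invented; for whole files you would call $csv->getline($fh) in a while loop instead of parse):

```perl
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1 });

# A line that split /,/ would mangle: the company name contains a comma
my $line = 'PC001,"Acme, Inc.",jdoe,HQ,unknown';
$csv->parse($line) or die "parse failed";
my @field = $csv->fields;   # 'Acme, Inc.' comes back as one field
```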