Instead of slurping the first file into an array, consider a hash. That will improve your search time, and delete will replace your terribly inefficient way of removing a record from the array (if you must keep the array, see splice as an alternative to your current removal method). A hash also lets you give meaningful names to the columns; if you are forced to stick with an array, consider the constant pragma to make the indices more meaningful. A small sketch of the difference follows.
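Here is a minimal sketch of the two removal strategies (the keys and fields are made up purely for illustration):

# Array of records: removal requires a scan plus a splice,
# which shifts every later element down.
my @records = ( [ 'acme', 'alice' ], [ 'initech', 'bob' ] );
for my $i ( 0 .. $#records ) {
    if ( $records[$i][0] eq 'initech' ) {
        splice @records, $i, 1;
        last;
    }
}

# Hash keyed on the same field: direct lookup, constant-time removal.
my %records = ( acme => ['alice'], initech => ['bob'] );
delete $records{'initech'};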
Cheers - L~R
Update: Here is a proof of concept. You should also consider using Text::CSV_XS for the CSV parsing rather than a plain split.

use constant COMP => 0;
use constant USER => 1;
use constant UNKN => 2;

# Build a lookup keyed on the join field so each client record
# can be matched in constant time.
my %inventory;
open(my $fh_inv, '<', 'Inventory.csv')
    or die "Unable to open 'Inventory.csv' for reading: $!";
while ( <$fh_inv> ) {
    chomp;
    my ($key, $company, $user, $unkn) = (split /,/)[0, 3, 1, 4];
    $inventory{$key} = [$company, $user, $unkn];
}

my @Data;
open(my $fh_clients, '<', 'clients.txt')
    or die "Unable to open 'clients.txt' for reading: $!";
while ( <$fh_clients> ) {
    chomp;
    my @field = split /,/;
    my $iref = $inventory{$field[3]};
    if ( $iref ) {   # skip clients with no matching inventory record
        push @Data, join ',', $field[3], $iref->[COMP],
            @field[6, 7, 4, 11], @{$iref}[USER, UNKN];
    }
}
# processing @Data after this
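If any of the fields can themselves contain commas or quoted values, a plain split /,/ will mis-parse them; that is where Text::CSV_XS earns its keep. A rough sketch of the first loop rewritten with it (assuming the same 'Inventory.csv' layout as above) would be:

use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1 })
    or die "Cannot construct Text::CSV_XS: " . Text::CSV_XS->error_diag();

my %inventory;
open(my $fh_inv, '<', 'Inventory.csv')
    or die "Unable to open 'Inventory.csv' for reading: $!";
while ( my $row = $csv->getline($fh_inv) ) {
    # Same column positions as the split-based version above.
    my ($key, $company, $user, $unkn) = @{$row}[0, 3, 1, 4];
    $inventory{$key} = [$company, $user, $unkn];
}
close $fh_inv;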