Hello monks
I've got a foreach loop like this:
OUTER: foreach my $line (<GZIP>) {
    INNER: for ( my $i = 0; $i < scalar @{ $categories{$k}->{traces} }; $i++ ) {
        if ( $line =~ /^($categories{$k}->{traces}[$i]->{regex})/ ) {
            my @lista = split /;/, $line;
            $A += $lista[ ${ $categories{$k}->{traces}[$i]->{calc} }[0] ];
            $B += $lista[ ${ $categories{$k}->{traces}[$i]->{calc} }[1] ];
            $C += $lista[ ${ $categories{$k}->{traces}[$i]->{calc} }[2] ];
            next OUTER;
        }
    }
}
close(GZIP);
Since the GZIP filehandle yields millions of lines, I'm trying to find an efficient way to parallelize the foreach loop so that, say, 5 lines are processed concurrently.
I haven't found a solution on the web or in the docs, but that's probably because I wasn't searching for the right thing.
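The closest I've gotten is the rough, untested sketch below using MCE::Loop from CPAN. The gzip pipe open and the trace.log.gz name are just placeholders for however GZIP is actually opened, and %categories and $k are assumed to be populated as in my real script. Since forked workers can't update the parent's $A/$B/$C directly, each worker sums into its own variables and gathers a partial result, which the parent adds up afterwards:

use strict;
use warnings;
use MCE::Loop max_workers => 5, chunk_size => 'auto';

my ( $k, %categories );    # assumed populated as in my real script

# Placeholder for however the gzip stream is actually opened.
open my $gz, '-|', 'gzip', '-dc', 'trace.log.gz' or die "gzip: $!";

# Each worker receives a chunk of lines, computes partial sums, and
# gathers them back to the parent process.
my @partials = mce_loop_f {
    my ( $mce, $chunk_ref, $chunk_id ) = @_;
    my ( $a, $b, $c ) = ( 0, 0, 0 );

    LINE: for my $line ( @{$chunk_ref} ) {
        for my $trace ( @{ $categories{$k}->{traces} } ) {
            next unless $line =~ /^($trace->{regex})/;
            my @lista = split /;/, $line;
            $a += $lista[ $trace->{calc}[0] ];
            $b += $lista[ $trace->{calc}[1] ];
            $c += $lista[ $trace->{calc}[2] ];
            next LINE;    # same effect as next OUTER in my loop
        }
    }
    MCE->gather( [ $a, $b, $c ] );    # one partial result per chunk
} $gz;

close $gz;

# Reduce the per-chunk partials into the final totals.
my ( $A, $B, $C ) = ( 0, 0, 0 );
for my $p (@partials) {
    $A += $p->[0];
    $B += $p->[1];
    $C += $p->[2];
}

My understanding is that handing each worker a chunk of lines, rather than a single line, keeps the IPC overhead from swamping the tiny per-line work, but I may be off base. A side benefit would be that the handle is read in chunks instead of foreach my $line (<GZIP>) slurping every line into memory at once.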
Any suggestions would be appreciated. Thanks!