http://qs1969.pair.com?node_id=736582

~~David~~ has asked for the wisdom of the Perl Monks concerning the following question:

I would like to know if there is a more efficient way of doing the following, or recommendations on how to make this more robust, smaller, faster, etc. I have a list of up to 1M lines. Each line contains a DEFECT number followed by up to 30 attributes.
DEFECTID ATTR1 ATTR2 ... ATTR30
I need to be able to select DEFECTIDs that match certain criteria, select a random sample of those, and then add them to a data set.
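For illustration, a minimal sketch of parsing one such line into a hash (the attribute names and the whitespace-separated format are assumptions; the post's own parseLine() is not shown):

```perl
use strict;
use warnings;

# Hypothetical column names -- the real attributes are not given in the post.
my @fields = qw( DEFECTID REGIONID SEVERITY );

sub parse_line {
    my ($text) = @_;
    my @values = split ' ', $text;
    return undef if @values < @fields;    # skip short/malformed lines
    my %line;
    @line{@fields} = @values[ 0 .. $#fields ];
    return \%line;
}

my $line = parse_line("D1001 7 3");
print "$line->{DEFECTID} region=$line->{REGIONID}\n";
```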
I currently have a rules hash, read in from a local file, with the following structure: $hash->{RULENUMBER}->{RULETYPE} = value. I want all the rules to be additive (so if I get 10 defects from rule 1 and 8 defects from rule 2, my final set contains 18 defects). I also need to be able to negate a rule by adding a ! to the value.
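Sketched concretely, that structure might look like this (the rule numbers and values are invented; REGION is the one rule type shown in the code below):

```perl
use strict;
use warnings;

# $hash->{RULENUMBER}->{RULETYPE} = value;
# rule 2 shows negation: "!7" means "REGIONID must not equal 7".
my $rulelist = {
    1 => { REGION => '7'  },
    2 => { REGION => '!7' },
};

print "rule 2 is negated\n" if $rulelist->{2}{REGION} =~ /^!/;
```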
I have the following algorithm / code working, but I was wondering if it could be better (one caveat is that I can't generalize and loop through some generic rule, because some rules are text matches, some are ranges, some are equivalencies, etc.).
    DEFECT: foreach my $defect ( @$list ) {
        # $summary is my pseudo object; this line converts each defect
        # line into a hash whose keys are the attributes to select on
        my $line = parseLine( $summary, $defect );
        next DEFECT if !$line;

        RULE: foreach my $rulenum ( keys %$rulelist ) {
            # one example rule, but there are many...
            if ( defined $rulelist->{$rulenum}->{REGION} ) {
                my $rule = $rulelist->{$rulenum}->{REGION};
                if ( $rule =~ s/!// ) {    # handles negation
                    next RULE if $line->{REGIONID} == $rule;
                }
                elsif ( $line->{REGIONID} != $rule ) {
                    next RULE;
                }
            }
            # i create a filtered list in the $summary hash
            # (the push belongs inside the RULE loop, keyed by $rulenum,
            # so each rule contributes its own matches additively)
            push @{ $summary->{FILTEREDLIST}->{$rulenum} }, $defect;
        }
    }
The problem is that I have a ton of rules, and looping through every rule for every defect seems like overkill. I was just wondering if there is a more efficient way of doing this, perhaps using some kind of parallel approach.
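As a sketch of the kind of speed-up being asked about (not from the post itself): each rule could be compiled into a closure once, up front, so the per-defect inner loop does no string parsing or negation handling. Only the REGION rule type from the code above is shown; everything else here is an assumption.

```perl
use strict;
use warnings;

# Compile a REGION rule value (possibly "!"-prefixed) into a code ref.
sub compile_region_rule {
    my ($value) = @_;
    my $negate = ( $value =~ s/^!// ) ? 1 : 0;
    return $negate
        ? sub { $_[0]->{REGIONID} != $value }
        : sub { $_[0]->{REGIONID} == $value };
}

my $rulelist = { 1 => { REGION => '7' }, 2 => { REGION => '!7' } };

# Compile phase: once per rule, outside the defect loop.
my %check;
for my $rulenum ( keys %$rulelist ) {
    $check{$rulenum} = compile_region_rule( $rulelist->{$rulenum}{REGION} );
}

# Match phase: per defect, just call the compiled checks.
my $line = { REGIONID => 7 };
for my $rulenum ( sort keys %check ) {
    print "defect matches rule $rulenum\n" if $check{$rulenum}->($line);
}
```

The same idea extends to the other rule types (text, ranges, equivalencies): each compiles to its own closure, and the inner loop treats them uniformly.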
Any help would be greatly appreciated.

Update: Fixed typo pointed out by hbm