Use compiled regular expressions (qr//) as the hash keys, so you don't rebuild each regex on every iteration. Alternatively, memoize buildRegexp. Tie::RegexpHash might also be useful.
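A minimal sketch of the memoization idea, using the core Memoize module. buildRegexp here is a stand-in for your own builder; the $builds counter is just there to show the sub body runs only once per distinct argument.

```perl
use strict;
use warnings;
use Memoize;

my $builds = 0;

# Stand-in for your buildRegexp: compiles a pattern string to a qr// object.
sub buildRegexp {
    my ($pat) = @_;
    $builds++;              # count how often we actually compile
    return qr/\Q$pat\E/;
}
memoize('buildRegexp');

# Ask for the same pattern a thousand times: compiled once, cached after that.
buildRegexp('foo') for 1 .. 1000;
print "compilations: $builds\n";    # prints "compilations: 1"
```

After memoize(), repeated calls with the same argument return the cached qr// object instead of re-entering the sub.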
You can eliminate a for loop and the long ||'ed regexp by changing the data structure. Instead of grouping the subset names in an array used as the key, make each subset name its own hash key whose value is a reference to the array; equivalent subset names then all point to the same array.
    my %lookingFor;
    my $ref = [1, 2, 3];
    $lookingFor{qr/a.*blah/o}  = $ref;
    $lookingFor{qr/the.*blee/o} = $ref;

    # Tie::RegexpHash would let you do $lookingFor{$bigfileline}
    # without looping over keys() at all. No idea if it's faster.
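For what the Tie::RegexpHash lookup would look like, here is a hedged sketch. Tie::RegexpHash is a CPAN module, not core; with it, the hash keys are regexes and fetching with a plain string returns the value of the first key whose pattern matches.

```perl
use strict;
use warnings;
use Tie::RegexpHash;    # CPAN module, not in core

tie my %lookingFor, 'Tie::RegexpHash';
my $ref = [1, 2, 3];
$lookingFor{qr/a.*blah/}   = $ref;
$lookingFor{qr/the.*blee/} = $ref;

my $bigfileline = "xxx a whatever blah yyy";
# The string is matched against the stored regex keys internally,
# so there is no explicit loop over keys() in your code.
my $hit = $lookingFor{$bigfileline};
```

Whether this beats a hand-rolled loop depends on the module's internals; it still has to try the patterns somehow, so benchmark before committing.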
The bottleneck is probably IO, but if it's not, a more sophisticated data structure might help. You're not really using the hash as a hash, but as an associative list (an array of arrays) that you search linearly. Is %lookingFor really big? If it is, you might find a hierarchy in the keys and turn it into a tree, so you don't have to search linearly.
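A sketch of the hierarchy idea, under the assumption (not from the original post) that the things being matched are literal prefixes: bucket the keys by first character, so a lookup scans only one small bucket instead of all of %lookingFor. The key names here are made up for illustration.

```perl
use strict;
use warnings;

# Hypothetical data: subset name => array ref.
my %lookingFor = (
    apple  => [1],
    apron  => [2],
    banana => [3],
);

# First level of the "tree": first character => list of keys in that bucket.
my %bucket;
push @{ $bucket{ substr($_, 0, 1) } }, $_ for keys %lookingFor;

sub lookup {
    my ($line) = @_;
    my $keys = $bucket{ substr($line, 0, 1) } or return;
    for my $k (@$keys) {    # still a linear scan, but over one bucket only
        return $lookingFor{$k} if index($line, $k) == 0;
    }
    return;
}

my $hit = lookup("apple pie");    # only the 'a' bucket is scanned
```

The same idea extends to more levels (first two characters, a trie, etc.) if the buckets are still large.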
In reply to Re: how to parse large files by kingkongrevenge
in thread how to parse large files by domcyrus