As for pre-calculation, yes, I plan to do what can be done. There are a couple hundred queries that will be repeated often, as their results form the baseline for statistical comparison of other queries. But there are 9 million possible 'other queries', each equally likely to be asked. It's these that I think can maybe be reduced to simple selects.
Perrin does, however, give me reason to pause. My fields are smallint, and no retrieval will be larger than 10,000 single-value records. Normally I'd be doing the intersection of a 10,000-element array with a ~4,000-element array of smallints. I have a compound primary key as the sole index. So before I commit too heavily to the Perl solution, I'll try a few different database schemas and indices. Nice to know I have the Perl solution in my pocket, though.
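For anyone following along, the Perl-side intersection I have in mind is the usual hash-as-set idiom: load the smaller list into a hash, then grep the larger one against it. This is a minimal sketch with made-up sample data, not my actual schema or IDs; the `intersect` helper name is just for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Intersect two lists of smallint IDs using a hash as a lookup set.
# Building the hash from the smaller list keeps it O(n + m),
# versus O(n * m) for a naive nested-loop comparison.
sub intersect {
    my ($big, $small) = @_;
    my %seen = map { $_ => 1 } @$small;   # lookup set from the smaller list
    return grep { $seen{$_} } @$big;      # keep only IDs present in both
}

# Toy stand-ins for the ~10,000- and ~4,000-element arrays:
my @a = (1, 2, 3, 5, 8, 13);
my @b = (2, 3, 4, 13);
my @both = intersect(\@a, \@b);
print "@both\n";   # prints "2 3 13"
```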
Off I go ...
In reply to Re: Basic Perl array intersection faster than mysql query join.
by punch_card_don
in thread Basic Perl array intersection faster than mysql query join.
by punch_card_don