Yes, I figured someone might comment on my slow-as-molasses file-building code, but I only needed to run that part once, so I didn't bother trying to speed it up. :-)
Thanks for the back-and-forth on this; it's been very instructive. In nearly all cases, I'd say that "load your filtering file into a hash and process the other file against it" is such a superior algorithm that it's worth trying even if you suspect it will force swapping to disk. But your solution for processing the file in chunks was nice for situations where that just isn't possible.
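For anyone landing on this thread later, here's a minimal sketch of the hash approach. The filenames and record layout are hypothetical: it assumes the filter file holds one key per line, and the big file is tab-delimited with the key in the first field.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical filenames for illustration.
my $filter_file = 'filter.txt';   # keys to match, one per line
my $data_file   = 'data.txt';     # records to filter; key is first field

# Load the (smaller) filtering file into a hash for O(1) lookups.
my %wanted;
open my $ff, '<', $filter_file or die "Can't open $filter_file: $!";
while (my $key = <$ff>) {
    chomp $key;
    $wanted{$key} = 1;
}
close $ff;

# Stream the large file line by line, printing only matching records.
# Only the hash lives in memory; the big file is never slurped.
open my $df, '<', $data_file or die "Can't open $data_file: $!";
while (my $line = <$df>) {
    my ($key) = split /\t/, $line, 2;
    print $line if exists $wanted{$key};
}
close $df;
```

The win is that each record of the big file costs one hash lookup instead of a scan of the filter list, so the run time is linear in the sizes of the two files.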
Aaron B.
My Woefully Neglected Blog, where I occasionally mention Perl.
In reply to Re^9: Indexing two large text files by aaron_baugher
in thread Indexing two large text files by never_more