You'll actually find that storing the whole file in memory will cause disk swapping at some point, slowing down your process. If instead you read a manageable chunk at a time, the process will run about as fast as possible, spending most of its time reading and matching, and none writing virtual memory to disk.
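A minimal sketch of the chunked approach (the pattern, chunk size, and sample data here are made up for illustration; an in-memory scalar stands in for the large file on disk):

```perl
use strict;
use warnings;

# Stand-in for a large file; open a real path the same way.
my $data = "abc MATCH def MATCH ghi\n" x 1000;
open my $fh, '<', \$data or die "open: $!";

my $chunk_size = 4096;   # tune this; a few MB is typical for real files
my $count      = 0;
while ( read $fh, my $chunk, $chunk_size ) {
    # NOTE: a match that straddles a chunk boundary is missed here;
    # for real data, carry the tail of each chunk into the next read.
    $count++ while $chunk =~ /MATCH/g;
}
close $fh;
print "$count matches\n";
```

Memory stays flat at roughly one chunk, no matter how big the input is.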
Also, I don't know what your file format is, but m//gix doesn't necessarily do the right thing across newlines.
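For instance, without the /s modifier the dot in a pattern refuses to match a newline, so a match that spans lines silently fails (a toy illustration, not the OP's actual pattern):

```perl
use strict;
use warnings;

my $text = "foo\nbar";

# Under /x alone, '.' does not match "\n" -- no match across the lines:
print "no /s:   ", ( $text =~ /foo.bar/x  ? "match" : "no match" ), "\n";  # no match

# Adding /s lets '.' match the newline too:
print "with /s: ", ( $text =~ /foo.bar/xs ? "match" : "no match" ), "\n";  # match
```

Similarly, ^ and $ only anchor at internal newlines if you add /m, so the right modifier set depends entirely on the file format.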
In general, programs have to be designed to work with the data. Only with experience can someone spew one of these off and expect it to work. (And with experience, if it doesn't work right away, both code and assumptions are checked for errors.)
-QM
--
Quantum Mechanics: The dreams stuff is made of
In reply to Re: Processing LARGE text files by QM, in thread Processing LARGE text files by Craig720