By today's standards they're probably not excessively large, roughly 100MB each (although there could be cases where multiple files will be concatenated before processing). I was worried that a single hash holding everything would be too large, but if half the available memory is the rule of thumb, I should be fine. A DB is definitely overkill, as each dataset will likely only be processed once or twice and then discarded.
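Roughly what I have in mind is below: several files treated as one concatenated stream, with every record going into a single hash. The tab-separated key/value layout is just a placeholder for illustration, not my actual data format.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch only: treat several input files as one concatenated stream
    # and load every record into a single in-memory hash.
    # The "key<TAB>value" record format is an assumption for illustration.

    my @files = @ARGV;    # e.g. dataset_part1.txt dataset_part2.txt
    my %data;

    foreach my $file (@files) {
        open my $fh, '<', $file or die "Can't open $file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($key, $value) = split /\t/, $line, 2;
            next unless defined $key;
            $data{$key} = $value;    # everything ends up in one hash
        }
        close $fh;
    }

    printf "Loaded %d records from %d file(s)\n",
        scalar(keys %data), scalar(@files);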
Thanks.