in reply to Re: is this the most efficient way to double-parse a large file?
in thread is this the most efficient way to double-parse a large file?
By today's standards it's probably not excessively large, roughly 100MB each (although there could be cases where multiple files will be concatenated before processing). I was worried that a single hash holding everything would be too large, but if half of available memory is the rule of thumb I should be fine. A DB is definitely overkill, as each dataset will likely only be processed once or twice and then discarded.
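To make that concrete, here is a minimal sketch of the single-hash approach I have in mind. The "key value" record format is only a placeholder for the real fields, and the diamond operator handles the multiple-file case by reading everything named on the command line as one stream:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %data;

    # <> reads each file given on the command line in turn, so several
    # input files are effectively concatenated without an explicit cat.
    while (my $line = <>) {
        chomp $line;

        # Placeholder parse: whitespace-separated "key value" records.
        my ($key, $value) = split ' ', $line, 2;
        next unless defined $key;

        push @{ $data{$key} }, $value;    # keep every value seen per key
    }

    printf "Parsed %d distinct keys\n", scalar keys %data;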
Thanks.
Re^3: is this the most efficient way to double-parse a large file?
by GrandFather (Saint) on Jan 21, 2014 at 22:03 UTC