in reply to Re: is this the most efficient way to double-parse a large file?
in thread is this the most efficient way to double-parse a large file?

By today's standards it's probably not excessively large, roughly 100MB each (although there could be cases where multiple files will be concatenated before processing). I was worried that a single hash with everything in it would be too large, but if half of available memory is the rule of thumb, I should be fine. A DB is definitely overkill, as each dataset will likely only be processed once or twice and then discarded.
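For what it's worth, here is a minimal sketch of the single-hash, single-pass idea I have in mind. The record format (an ID followed by whitespace and a value) is just a placeholder assumption, since the real layout is different:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Minimal sketch of the "one big hash" approach: read the input once,
    # keyed by an ID, then do the second "parse" over the in-memory
    # structure instead of re-reading the file. The record format (an ID
    # followed by whitespace and a value) is a placeholder assumption.
    my %records;

    while ( my $line = <> ) {
        chomp $line;
        my ( $id, $value ) = split /\s+/, $line, 2;
        next unless defined $id && length $id;
        push @{ $records{$id} }, $value;    # keep duplicate IDs together
    }

    # The second pass is now just a walk over the hash.
    for my $id ( sort keys %records ) {
        printf "%s: %d record(s)\n", $id, scalar @{ $records{$id} };
    }

Reading via the diamond operator also covers the concatenation case: running "perl script.pl file1 file2" treats the files as one stream.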

Thanks.


Replies are listed 'Best First'.
Re^3: is this the most efficient way to double-parse a large file?
by GrandFather (Saint) on Jan 21, 2014 at 22:03 UTC

    The 1/2 figure could be almost any number. The reply was more to shake up your thinking a little and nudge you toward "let's try the simple way first". Remember: premature optimisation is the root of all evil.

    The important rule of thumb is: "If the code changes take longer than the run time saved, it's fast enough already".
