Perl probably isn't as efficient at reading a file multiple times as you think. More likely, the reason you saw little difference between reading the file once and reading it twice is that you ran the test on a decent operating system under little memory pressure: the first read left the file's contents in the OS disk cache, so the second read came from memory rather than from disk.
Keep this in mind whenever you open a file more than once: a benchmark showing no speed penalty today may not hold tomorrow, especially if the script ends up on a shared server under a memory crunch, where the OS evicts things from the disk cache much sooner.
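One way to sidestep the issue entirely is to read the file once and keep it in memory. A minimal sketch (the sample file and /foo/ pattern are made up for illustration):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Create a small sample file so the sketch is self-contained.
my ($out, $file) = tempfile();
print {$out} "foo\nbar\nfoo baz\n";
close $out;

# Read the file once and keep the lines in memory; both passes
# below work from that copy instead of relying on the OS disk
# cache surviving between two separate opens.
open my $in, '<', $file or die "Can't open $file: $!";
my @lines = <$in>;
close $in;

my $matches = grep { /foo/ } @lines;   # first pass over the data
my $bytes   = 0;
$bytes += length for @lines;           # second pass over the data

print "$matches matches, $bytes bytes\n";   # prints "2 matches, 16 bytes"
```

For a large file you'd want to think about memory use before slurping, but for anything that comfortably fits in RAM this makes the cost of the second pass independent of the disk cache.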
How you benchmark it also has a big impact on the results: the bigger the 'big_subs' are, the more they swamp the relatively small cost of reading the file.
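You can see that swamping effect with the core Benchmark module. The subs here are cheap stand-ins I made up (not the original code), sized so the contrast is visible:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical stand-ins for illustration: a cheap "file read"
# next to a comparatively expensive big_sub.
sub fake_read { my $x = 0; $x += $_ for 1 .. 100;    return $x }
sub big_sub   { my $x = 0; $x += $_ for 1 .. 50_000; return $x }

# When big_sub dominates, adding a second "read" barely moves
# the numbers -- which is how a real read cost can hide in a
# benchmark of the whole script.
cmpthese(200, {
    one_read  => sub { fake_read(); big_sub() },
    two_reads => sub { fake_read(); fake_read(); big_sub() },
});
```

If you want to measure the read itself, benchmark it in isolation rather than buried inside the full run.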
We're not surrounded, we're in a target-rich environment!
In reply to Re: A Luser's Benchmarking Tale
by jasonk
in thread A Luser's Benchmarking Tale
by Melly