in reply to A Luser's Benchmarking Tale

Perl probably isn't as efficient at reading a file multiple times as you think it is. More likely, the reason you didn't see much difference between reading the file once and reading it twice is that you were running on a decent operating system that wasn't under memory pressure: the first time you read the file, the OS kept its contents in the disk cache, so the second time around it was read from memory rather than from disk.
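
If you want to see the cache at work, a quick sketch along these lines (the file name is just a placeholder, and it assumes the file isn't already cached when you start) will usually show the first, cold read taking noticeably longer than the second:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $file = 'some_big_file.txt';    # made-up name for the example

    # Read the same file twice and time each pass. Unless memory is
    # tight, the second pass is served from the OS disk cache and is
    # usually much faster than the first (cold) read.
    for my $pass (1, 2) {
        my $start = time;
        open my $fh, '<', $file or die "Can't open $file: $!";
        1 while <$fh>;
        close $fh;
        printf "pass %d: %.4f seconds\n", $pass, time - $start;
    }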

Whenever you open a file more than once, keep this in mind: a test that shows no penalty for the extra read today may give different results in the future, especially if you run the script on a shared server with more of a memory crunch, which won't keep things in the disk buffer for as long.

How you benchmark it can also have a big impact on the results: the bigger the 'big_subs' are, the less influence the relatively small cost of reading the file will have. A rough sketch of what I mean follows.
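
The file name and the stand-in big_sub below are made up for the example, but the core Benchmark module makes the comparison easy, and you can watch the gap between read-once and read-twice shrink as big_sub grows:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $file = 'data.txt';    # assumed test file

    # Slurp the whole file into a scalar.
    sub slurp {
        open my $fh, '<', $file or die "Can't open $file: $!";
        local $/;
        return scalar <$fh>;
    }

    # Stand-in for the expensive processing; the bigger this gets,
    # the less the extra read shows up in the comparison.
    sub big_sub { my $n = 0; $n += $_ for 1 .. 100_000; return $n }

    cmpthese( -2, {
        read_once  => sub { my $data = slurp(); big_sub() },
        read_twice => sub { my $data = slurp(); $data = slurp(); big_sub() },
    } );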


We're not surrounded, we're in a target-rich environment!

Re: Re: A Luser's Benchmarking Tale
by Melly (Chaplain) on Nov 25, 2003 at 15:03 UTC

    Good point regarding the caching (and one I hadn't considered). I'd taken on board the point about 'big_sub' (hence the name ;))

    Still, when all's said and done, and even if caching weren't an issue, I don't think I could bear to read a file more times than is strictly necessary - call it an aesthetic prejudice ;)

    Tom Melly, tom@tomandlu.co.uk