in reply to Re^2: search array for closest lower and higher number from another array
in thread search array for closest lower and higher number from another array

Some thoughts on your benchmark!

Depending on the operating system you're working on, the actual results of your benchmark could be very different. If your benchmark was run on a *nix system, your first call to the subroutine &case1:

print "case1 finds ", &case1, " matches \n";

causes the file to be read and cached by the *nix system. I wasn't sure how much perl would or wouldn't benefit from this, but a *nix grep can do its pattern matching on the file in cached memory; it doesn't even have to do a memory-to-memory copy/move. (I didn't look at the source, so grep may in fact be doing the memory-to-memory copy/move.)
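For what it's worth, a Perl program can get the same zero-copy behavior by matching against a memory-mapped file. A minimal sketch using the CPAN module File::Map (not core; the filename and pattern here are just placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Map 'map_file';    # CPAN module, not in core

    # Map the file read-only; no read() copies it into a Perl buffer,
    # so the regex engine walks the kernel's cached pages directly.
    map_file my $map, 'data.txt', '<';

    my $matches = () = $map =~ /some_pattern/g;
    print "found $matches matches\n";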

I ran your benchmark on an AIX system, and the results were basically the same as what you saw. I then modified your script to call &case2 first, then &case1, and then &case0 (only once each; Benchmark complained about the low count!) on a new, un-cached file. The result was that &case0 was the fastest, followed by &case1, with &case2 (grep) the slowest. I ran this script on OpenSUSE with similar results. It does appear that perl benefits from the caching. If I ran the test again, with the file now cached, grep was the winner!
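If anyone wants to reproduce this, the wrinkle is that, as far as I can tell, Benchmark's timethese() runs tests in sorted-name order, so separate timethis() calls are needed to control which sub hits the cold file first. A rough sketch (the case subs are the ones from the benchmark upthread):

    use strict;
    use warnings;
    use Benchmark qw(timethis);

    # Run grep first, against the cold cache, then the perl versions.
    # A count of 1 makes Benchmark warn about too few iterations,
    # but any more runs would hit the now-warm cache.
    timethis(1, \&case2, 'case2 (grep)');
    timethis(1, \&case1, 'case1');
    timethis(1, \&case0, 'case0');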

If you used a *nix system, I hope this gives some idea of why grep looked so much faster than perl.

Note: It's faster to work in memory than on disk.

Further note: You may have to restart the system to guarantee the file isn't already cached. I made this mistake the first time by using a large .gz file that I unzipped, which left both the compressed and the uncompressed file in the cache.
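On Linux (though not on AIX, as far as I know) you can avoid the reboot by asking the kernel to drop its caches. Something like this, run as root:

    # Flush dirty pages first, then drop page cache, dentries, and inodes.
    system('sync') == 0 or die "sync failed\n";
    open my $fh, '>', '/proc/sys/vm/drop_caches'
        or die "cannot write drop_caches (need root): $!";
    print $fh "3\n";
    close $fh;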

Thank you

"Well done is better than well said." - Benjamin Franklin

Replies are listed 'Best First'.
Re^4: search array for closest lower and higher number from another array
by JavaFan (Canon) on Feb 05, 2011 at 23:00 UTC
    Sure, Unix (and I'd be amazed if there are modern OSes that don't do this) caches files. But it will do so regardless of the program that uses the file. It's not going to say, "Ooooh, this file is opened by a process called 'grep', I better cache the results, and here, this file is opened by dirty stinking little perl, I'm not going to keep that one around!".

      You may want to consider the different types of disk I/O sub-systems in modern operating systems. Most *nix systems support raw I/O, direct I/O, concurrent I/O, modular I/O, and so on. For example, most databases use raw I/O. Performance of these applications is usually better with raw I/O than with the other I/O methods, because it avoids the additional work of memory copies, logging, and inode locks.

      My comment about perl was directed at which I/O subsystem perl was using, not that perl would be treated differently by that I/O sub-system.

      When writing *nix utilities (like grep), system programmers were encouraged to write "cache aware" programs: whenever possible, work on the cached version directly, and avoid the memory-to-memory move/copy. (For clarification, I use "move/copy" because the operating system may perform a move rather than a copy, but this happens in the paging I/O sub-system and has to do with paging performance. It is transparent to the application.)

      All I was trying to point out was that a 500MB file may be cached on a test machine but not on a production machine, and that a pure perl solution may very well be the better solution on a production machine. But that is the OP's decision.

      "Well done is better than well said." - Benjamin Franklin

        Well, I ran 10 iterations of the benchmark, as you saw. I have also run greps on fresh data files (which were certainly not cached), and the difference in speed was very small (~1%) compared with subsequent searches. I do believe grep is far faster than even running the data files through an empty Perl loop. I definitely wish there were a better pure Perl solution.
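        For anyone who wants to check that claim on their own data, a rough timing sketch (the file name and pattern are placeholders):

            use strict;
            use warnings;
            use Benchmark qw(timethese);

            my $file = 'data.log';    # placeholder

            timethese(10, {
                perl_loop => sub {
                    # Plain line-by-line scan in Perl.
                    open my $fh, '<', $file or die "open $file: $!";
                    my $n = 0;
                    while (<$fh>) { $n++ if /FOO/ }    # placeholder pattern
                    close $fh;
                },
                shell_grep => sub {
                    # Let the external grep do the scan.
                    chomp(my $n = `grep -c FOO $file`);
                },
            });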

        This is of course because I'm using grep as a "dumb" tool to get context around the match. Then I feed this data into Perl, where the true parsing is done (to remove irrelevant lines I don't wish to see). If I could do everything inside a Perl loop, I would imagine it would be more efficient. In this case, however, Perl needs to find the line with the data header before the match, and continue after the match until the next header. I just haven't found a better way than "pre-searching" the file with grep. It's fast enough, but could it be faster? :D I'm turning into an efficiency addict now.
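        The closest pure-Perl shape I can think of is a one-pass filter that buffers each header-to-header block and only prints it if it contained a match. A rough sketch; the header regex and the match pattern are placeholders for whatever the real format needs:

            #!/usr/bin/perl
            use strict;
            use warnings;

            my (@block, $hit);

            sub flush_block {
                print @block if $hit;    # emit the block only if it matched
                @block = ();
                $hit   = 0;
            }

            while (my $line = <>) {
                flush_block() if $line =~ /^HEADER/;    # a new block starts here
                push @block, $line;
                $hit = 1 if $line =~ /pattern/;
            }
            flush_block();    # don't forget the final block

        One pass over the file, no second grep process, and memory use is bounded by the largest block rather than the whole file.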