in reply to Java vs. Perl file I/O benchmarks; valid test?
$file = <FILE>;
This is going to be handled by an opcode or two that will run at C speed. On the other hand, the Java implementation is forced to use an explicit loop to fetch all the contents of the file. Other things being equal, this will kill Java.
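To make the contrast concrete, here is a minimal sketch of the kind of explicit read/write loop Java pushes you into (my own illustration, not the benchmark's actual code; the temp files and buffer size are just for the example):

```java
import java.io.*;
import java.nio.file.*;

public class CopyLoop {
    public static void main(String[] args) throws IOException {
        // Self-contained setup: a small input file to copy.
        Path in = Files.createTempFile("copyloop-in", ".txt");
        Path out = Files.createTempFile("copyloop-out", ".txt");
        Files.write(in, "hello from the copy loop".getBytes());

        // The explicit loop: read a chunk, write a chunk, until EOF.
        try (InputStream is = new FileInputStream(in.toFile());
             OutputStream os = new FileOutputStream(out.toFile())) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = is.read(buf)) != -1) {
                os.write(buf, 0, n);
            }
        }
        System.out.print(new String(Files.readAllBytes(out)));
    }
}
```

Every iteration of that loop runs interpreted bytecode and a method call or two, where Perl's slurp is a single opcode dispatch.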
What really strikes me, though, is the fact that the Java version is about three times as long. In terms of programmer efficiency, that's an important factor to take into consideration. Much more so than raw I/O throughput.
I'm no expert at understanding Perl's opcodes, but if you do a perl -MO=Terse copyfile you can get a feeling for what is going on under the hood.
Dwelling on this overnight, it occurred to me that the Java implementation is not following the spec "read a file in to a variable". The Perl implementation is slurping the entire file into a variable, which puts considerable strain on the virtual memory manager. The Java implementation, on the other hand, is reading a bit from the input file, and then writing a bit to the output file, until the input file is exhausted.
At no time does the Java version hold an entire copy of the file in core, whereas Perl does, paying the added cost of system overhead while the kernel frantically discards 10 megabytes of pages used for caching and buffering, in order to give them to the Perl process.
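A Java version that actually followed the "read a file in to a variable" spec would slurp the whole file into one variable and then write it back out, just as the Perl one-liner does. A minimal sketch (again my own illustration, using java.nio, not the benchmark's code):

```java
import java.io.IOException;
import java.nio.file.*;

public class CopySlurp {
    public static void main(String[] args) throws IOException {
        // Self-contained setup: a small input file to copy.
        Path in = Files.createTempFile("slurp-in", ".txt");
        Path out = Files.createTempFile("slurp-out", ".txt");
        Files.write(in, "whole file in memory".getBytes());

        // Like Perl's slurp: the entire file lands in one variable...
        byte[] contents = Files.readAllBytes(in);
        // ...and is then written out in one shot.
        Files.write(out, contents);

        System.out.print(new String(Files.readAllBytes(out)));
    }
}
```

With a 10-megabyte input, this version pays the same memory cost the Perl slurp does, which is what makes it the apples-to-apples comparison.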
A fairer comparison would be to use this code:
open(IN, $ARGV[0]); open(OUT, ">$ARGV[1]"); print OUT <IN>; close IN; close OUT;
Looking at it that way, it's clear the Perl programmer would have already typed in the program, corrected spurious syntax errors, run the program, and received the results before the Java programmer had finished typing in the Java program.
Re: Re: Java vs. Perl file I/O benchmarks; valid test?
by danboo (Beadle) on Mar 01, 2002 at 14:01 UTC