$ perl -MO=Deparse -e '$line = readline(*STDIN)'
$line = <STDIN>;
-e syntax OK
If you are really concerned about it, you should be using read() or sysread() instead.
-sauoq
"My two cents aren't worth a dime.";
I would expect the fourth one to be the slowest, since you're going out of your way to get a line. The first three are, I believe, identical.
But this is not where you should be looking for speed increases in your code. This is such a minor issue. You're over-optimizing.
$ perl -MO=Deparse -e 'readline(STDIN)'
readline 'STDIN';
-e syntax OK
Not sure when that string-to-filehandle lookup happens, though. (A one-time penalty?)
I would expect the fourth to be about as quick as the others (after the open) as it's just dup'ing STDIN. Is it slower to store a filehandle in a lexical? (I wouldn't think so but have no real clue.)
shrug
Not that any of this changes the fact that it shouldn't matter to the OP. :-)
-sauoq
"My two cents aren't worth a dime.";
Find out for yourself, using a static file, the Benchmark module, and the exported timethese() subroutine. There are many examples of its usage on this site.
To be frank, the way you read makes no difference in this context. The time it takes a human being to respond and type is far longer than the read itself.
Of course, if the question is being asked in the context of reading STDIN from a pipe (rather than from a keyboard), then there could be a good reason to figure out whether one method of reading is faster or slower than another, and a benchmark test would be worthwhile -- even if all it does is prove that there's hardly any difference.
And in that regard, the Benchmark module probably isn't necessary or even appropriate; the unix "time" command would probably do. Just put together a suitable test script that reads and processes data from STDIN, but accepts a command-line option to determine what sort of syntax to use for reading, then run a series of commands like:
feeder_process | time test-perl-reader diamond > /dev/null
# (repeat several times, average the results)
feeder_process | time test-perl-reader readline1 > /dev/null
# (repeat, average the results)
feeder_process | time test-perl-reader readline2 > /dev/null
# (you know the drill...)
If the difference among the various averages is greater than the variance among test runs for any single method, then maybe there's a real difference in the efficiency of the different input methods.
But I would expect any differences to be a very small fraction of the overall pipeline time.