It can't work that way. The same string may not compile to the same optree, depending on how the environment around it has changed. For example, what does my $x = new Foo (1, 2, 3); compile to?
- my $x = new(Foo(1,2,3));
- my $x = Foo->new(1,2,3);
Depending on whether the Foo package (or a new sub) is already visible to the parser, it can be either one. Thus, eval STRING will incur the compilation penalty every time.
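A minimal sketch of the usual workaround (the snippet string here is purely illustrative): instead of re-evaling the string on every use, compile it once into an anonymous sub and call the code ref repeatedly, so the compilation penalty is paid a single time.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical code we receive as a string at run time.
my $snippet = 'my $sum = 0; $sum += $_ for 1 .. 100; $sum';

# Naive approach: eval STRING recompiles the code on every call.
my $naive = eval $snippet;            # compile + execute, every time

# Cheaper approach: compile once into a code ref, then just call it.
my $compiled = eval "sub { $snippet }";
die $@ if $@;
my $first  = $compiled->();           # no recompilation here
my $second = $compiled->();           # or here

print "$naive $first $second\n";      # all three are 5050
```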
Being right, does not endow the right to be rude; politeness costs nothing. Being unknowing, is not the same as being stupid. Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence. Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.
I think the point was the reverse - if I load a line from a file, Perl buffers a whole chunk. Then I eval STR multiple times before hitting the disk for the next chunk. Or the point was that we eval the same STR multiple times through the life of the program, but only read it from disk once. Either way, the single disk I/O is something we can't avoid, but the repeated evals are something we can.
Personally, if I were designing this from the ground up, I would mandate that all functions were actually package methods (that take the package name as the first argument). Then I could do something like this:
my ($package, $func) = extract_from_line($_);
$package->$func($_);
Doesn't get any faster than that. We're still using hashes under the hood, but we let Perl figure that out. The symbol table is just too useful - and this way we get it while still using strict.
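Put together, the dispatch above looks roughly like this. The package name, method name, and the line-splitting logic are all illustrative stand-ins (the parent's extract_from_line() is hypothetical); the point is that a method call through two variables is legal under strict, because Perl resolves it via the symbol table for us.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative package; in this scheme every function is written as a
# class method, taking the package name as its first argument.
package Dispatcher;
sub greet { my ($class, $line) = @_; return "greet: $line" }

package main;

# Stand-in for the hypothetical extract_from_line(): pull the package
# name, method name, and argument out of an input line.
my $line = "Dispatcher greet hello";
my ($package, $func, $arg) = split ' ', $line, 3;

# Both the invocant and the method name are plain strings in variables,
# yet this compiles and runs cleanly under strict.
my $result = $package->$func($arg);
print "$result\n";   # greet: hello
```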
Alternatively, without forcing package-method semantics, you could bypass strict refs and access the symbol table directly. But since that is a bit more complex and less readable, I'll leave it for a future node if you need to go that way.
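For the curious, a hedged sketch of that alternative (package and sub names are illustrative): disable strict 'refs' in the smallest possible scope, look the sub up by its fully qualified name, and keep the resulting code ref around.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Handlers;
sub double { return 2 * shift }

package main;

# Names as they might come back from some line-parsing routine.
my ($package, $func) = ('Handlers', 'double');

my $coderef = do {
    no strict 'refs';              # symbolic refs allowed in this block only
    \&{"${package}::${func}"};     # look the sub up in the symbol table
};

my $answer = $coderef->(21);
print "$answer\n";   # 42
```

Scoping the no strict 'refs' to a do block keeps the rest of the program fully under strict.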
Perl will buffer a chunk, but what happens if that chunk gets paged to disk? :-)
That depends on a bazillion factors, a few of which are:
- your actual hard drive - some do onboard caching, some don't. Some have adaptive seek strategies, most don't.
- where it's located - is it in the case or on the network somewhere?
- how you're connected to it - megabit? dual-gigabit SAN? Serial? SCSI?
- what OS you're using - MS vs. Linux vs. Solaris vs. VMS vs. ...
- what filesystem you're using - ext3 vs. reiser3 vs. reiser4 vs. WinFS vs. ...
- Any number of options your sysadmin(s) might have set
- The rest of the load on the box / SAN / etc ...
- Any caching done by other applications - Oracle, for instance, has some aggressive caching
- How much RAM you have - it doesn't help your I/O speed if your cache is in a page which is on disk
In other words, the benchmark can vary by time of day, day of the week, even day of the year. (If you have a bunch of quarterly reports that get run, they can affect your database load, which affects your OS caching performance, which affects your ... you get the picture.)