I did not say that function call overhead is peanuts in general. I said that the overhead for 100,000 calls is peanuts, and I should perhaps have added, in that specific context.
Just a quick test:
$ time perl -e '$e = shift;sub inc{$c=shift; $c++;return $c;} $d=0; $d = inc($d) while $d < $e; print $d;' 1e3
1000
real 0m0.073s
user 0m0.030s
sys 0m0.030s
Laurent@Laurent-HP ~
$ time perl -e '$e = shift;sub inc{$c=shift; $c++;return $c;} $d=0; $d = inc($d) while $d < $e; print $d;' 1e4
10000
real 0m0.065s
user 0m0.046s
sys 0m0.015s
Laurent@Laurent-HP ~
$ time perl -e '$e = shift;sub inc{$c=shift; $c++;return $c;} $d=0; $d = inc($d) while $d < $e; print $d;' 1e5
100000
real 0m0.108s
user 0m0.046s
sys 0m0.046s
Laurent@Laurent-HP ~
$ time perl -e '$e = shift;sub inc{$c=shift; $c++;return $c;} $d=0; $d = inc($d) while $d < $e; print $d;' 1e6
1000000
real 0m0.496s
user 0m0.451s
sys 0m0.030s
Laurent@Laurent-HP ~
$ time perl -e '$e = shift;sub inc{$c=shift; $c++;return $c;} $d=0; $d = inc($d) while $d < $e; print $d;' 1e7
10000000
real 0m4.320s
user 0m4.274s
sys 0m0.030s
A tenth of a second for 100,000 calls on my laptop: I think that can be considered peanuts. In fact, about half of that time is really overhead for compilation, startup, etc.; the actual cost is closer to 0.04 sec. for the 100,000 calls, as shown by the runs with 1e6 and 1e7. That is less than half a microsecond per function call in this case.
If that level of optimization is required, then maybe Perl is not the right language, and changing the programming language should be considered. But the context given by the OP indicates that such a level of optimization is most probably not required.
I deal daily with huge amounts of data (tens of gigabytes), and time optimization is one of my major concerns. If my program runs in, say, 5 hours, I am not interested in shortening its duration by 40 seconds by removing 100 million function calls. I am far more interested in finding a better way of doing the processing that might bring that duration down to 1 hour, or possibly much less. And quite often it can be done, for example by adding a simple hash that lets me filter out most of my data early in the processing.
I gave a talk on this subject at a Perl Mongers conference in France last month and presented some examples of just that type of optimization. My point in that talk was: why do I use (mostly) Perl if performance is one of my major goals? C would be faster, wouldn't it? Well, because I am not really interested in microsecond optimizations. I am far more interested in finding better ways of doing things, figuring out better algorithms, and Perl gives me the expressive power to do that quickly: where I need two hours to write and test a better (faster) algorithm in Perl, I might need two days or perhaps two weeks to do it in C or Java, so that I would probably not even be given the budget to do it.