Generalised benchmarks are nearly always useless. That is to say, benchmarking a feature, or set of features, in isolation from a real application that uses them produces a set of numbers that have little relevance to anything--other than the benchmark itself.
For example, it's not hard to produce a benchmark showing that some string operations have gotten slower in recent versions of Perl. But what those numbers won't indicate is that those recent Perls now support Unicode operations. If you don't need Unicode, that may seem detrimental; but if you do need Unicode, the performance benefit of it being integral to Perl rather than an add-on (or worse, self-implemented) is huge.
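The Unicode trade-off is easy to see in a modern Perl (5.12 or later, for the `unicode_strings` feature). A small sketch--the sample word is made up, but the behaviour is core Perl:

```perl
use strict;
use warnings;
use utf8;                       # this source file itself is UTF-8 encoded
use feature 'unicode_strings';  # string ops get Unicode semantics (Perl 5.12+)

binmode STDOUT, ':encoding(UTF-8)';

my $word = "café";

# uc() knows that e-acute upper-cases to E-acute -- exactly the kind of
# work that costs a little on every string op, but would be painful to
# reimplement yourself if the language didn't do it for you.
my $upper = uc $word;

print "$upper\n";           # CAFÉ
print length($word), "\n";  # 4 characters, not the 5 bytes of its UTF-8 encoding
```

A Perl 4-era benchmark of `uc` would beat this on raw speed, and would also get the answer wrong.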
Likewise, benchmarking assembler against Perl would (probably; I haven't tried it:) show that assembler is faster--but you will almost certainly not switch to assembler for most of the things you would do with Perl. That's because once you consider the time taken to write the programs, as well as the time taken to run them, Perl wins hands down--at least until the program has been run hundreds, if not thousands, of times. If the program has to run on two or more platforms, Perl's performance will look even better.
Benchmarking is only really useful for comparing different implementations of actual applications. Any other use only provides ammunition for endless and pointless debate:)
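For that narrow, legitimate case--comparing two implementations of the same concrete task--Perl's core Benchmark module is the usual tool. A minimal sketch; the two string-building subs are made-up examples, not a recommendation of either:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @words = ('perl') x 1000;

# Two made-up implementations of the same small task: building one
# space-separated string from a list. Each returns the result length.
sub with_join   { my $s = join ' ', @words; length $s }
sub with_concat { my $s = ''; $s .= "$_ " for @words; length $s }

# A negative count means "run each sub for at least that many CPU seconds".
cmpthese(-1, {
    join   => \&with_join,
    concat => \&with_concat,
});
```

The output is a rate table comparing the two subs against each other--which is only meaningful because both subs do the job your application actually needs done.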
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
To add to BrowserUk's post: if benchmarks are truly important to you, benchmarking "the language" won't do you much good, as no application you benchmark will be very representative of your own. If you have only a few contending languages for your project, you could write a simple prototype of your application in each candidate language and benchmark that. This will also expose you to the more subtle offerings and problems of each language.
Benchmarking different versions of Perl (or rather, the same script run under different versions of Perl) is easy: just install all those Perl versions, run your script under each one, and time its performance. It would make for a nice statistic, but its value would be next to useless. All it would show you is how fast one script ran under various versions of Perl; it would tell you nothing about Perl as a whole. I'm quite sure you could write a Perl 4 script that outperforms a Perl 5.8 script. Should we therefore all turn back to Perl 4?
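That procedure can be scripted from Perl itself. A rough sketch, assuming the extra interpreters live at made-up paths like the one below ($^X, the currently running perl, is included so the loop always has at least one real interpreter to time; the one-liner stands in for your real script):

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

# $^X is the perl running this script; the /opt path is a placeholder
# for wherever your other builds are installed.
my @perls = ($^X, '/opt/perl-5.6.2/bin/perl');

# Run a workload under one interpreter and return the wall-clock seconds,
# or undef if that interpreter isn't installed here.
sub time_perl {
    my ($perl) = @_;
    return undef unless -x $perl;
    my $t0 = time;
    system $perl, '-e', '1 for 1 .. 100_000';   # stand-in for your real script
    return time - $t0;
}

for my $perl (@perls) {
    my $elapsed = time_perl($perl);
    next unless defined $elapsed;               # quietly skip missing builds
    printf "%s: %.3fs\n", $perl, $elapsed;
}
```

And, as said above, the resulting table tells you about this one script, not about Perl.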
CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
There is a benchmarking program used by members of the p5p mailing list. It's called 'perlbench' or something like that, and I think it's found on CPAN. Contrary to what other posters in this thread are saying, it can be very useful in determining whether some versions of Perl have problems with certain operations. But interpreting the numbers in the right way isn't easy.
You all make very good points. And BrowserUk clearly emphasizes my statement that benchmarking can be very tricky to do well. I think that I either did not state it clearly or was misunderstood, but I meant a set of specific (not overall or general) benchmarks that, when combined, could give a good, albeit not complete, picture of Perl's abilities.
I am relatively new to Perl, and I think it would be good to be able to take a historical view of Perl--not just its functionality, but also its performance. This would give some interesting insight into the direction Perl is moving.
I was just curious what others have found on this subject as far as resources go. If there are no such studies, then maybe when my schedule frees up I will throw something together and share the results.
Thanks again everyone for your input.
Cameron