yperl has asked for the wisdom of the Perl Monks concerning the following question:
Each time I visit this site,
http://shootout.alioth.debian.org/
I get angry.
Perl is always near the bottom of the score list. Even for regular expressions, it is badly ranked.
Could we try to improve the algorithms to get our preferred language to the top?
Regards,
Younès
Re: The Great Computer Language Shootout
by tilly (Archbishop) on Sep 21, 2004 at 16:39 UTC
To take the specific item that you complain about, I happen to know that Perl's regular expression engine could easily be sped up by removing a sanity check for pathological regular expressions. The result would be to speed up a lot of programs by an unnoticeable amount, at the cost of making some pathological ones surprise you by taking a few billion years to finish. That change might make Perl look good on a benchmark, but it would result in more bug reports. Do you really want that change?

Furthermore, other areas of slowness are due to unavoidable design considerations. For instance, Perl is a highly dynamic interpreted language. That is just never going to be as fast as a static compiled language. Which matters more to you, performance or programming convenience? If it is raw performance, then you're probably using the wrong language.

However, I have good news for you. The Parrot project is creating a new version of Perl, and is very concerned with performance considerations. If you want to be of assistance, you could try implementing the shootout test suite in Parrot byte-code, submit that to the project, and identify specific performance issues that you uncover.
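tilly doesn't name the sanity check, but the kind of pathological pattern he presumably means is easy to demonstrate: a nested quantifier such as /^(a+)+$/ forces the engine to try every way of splitting the a's before it can fail, so match time grows exponentially on a string that cannot match. A minimal sketch (timings vary by machine and Perl version; keep $n small):

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

# A nested quantifier is a classic source of exponential backtracking:
# every partition of the a's between the inner and outer '+' gets tried
# before the overall match can be declared a failure.
my $pathological = qr/^(a+)+$/;

for my $n ( 10, 15, 20 ) {
    my $subject = ( 'a' x $n ) . 'b';    # the trailing 'b' forces failure
    my $t0      = time;
    my $matched = $subject =~ $pathological ? 1 : 0;
    printf "n=%2d matched=%d elapsed=%.4fs\n", $n, $matched, time - $t0;
}
```

Guard rails against inputs like this cost every ordinary match a little; dropping them would flatter the benchmark numbers while letting a few real-world patterns hang, which is exactly the trade-off tilly is pointing at.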
by Elian (Parson) on Sep 21, 2004 at 17:32 UTC
> However I have good news for you. The Parrot project is creating a new version of Perl, and is very concerned with performance considerations. If you want to be of assistance, you could try implementing the shootout test suite in Parrot byte-code, submit that to the project, and identify specific performance issues that you uncover.

While I'm all up for performance shakeouts, one thing to keep in mind is that this version of the shootout needs Debian-installable packages of the languages in question before they can be added to the test. (So if someone wants to build a Debian installer target for the Makefile... :)
by hardburn (Abbot) on Sep 21, 2004 at 18:20 UTC
And if you do make a Parrot .deb for the shootout, don't forget to put in the optimizations. Since Parrot is still heavily under development, optimizations are turned off in the builds by default to help with debugging.

"There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.
Re: The Great Computer Language Shootout
by hardburn (Abbot) on Sep 21, 2004 at 16:15 UTC
"There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.
Re: The Great Computer Language Shootout
by FoxtrotUniform (Prior) on Sep 21, 2004 at 19:18 UTC
I just recalculated the rankings with CPU weight 0.1, memory weight 0.01, and LOC weight 1. Perl came out sixth. (OCaml came out first and fourth, Pike second, Haskell third (and, er, last), and Ruby fifth.)

My theory here is that a programmer with a good grasp of a given language writes lines of code at a fairly constant rate. (It's often argued that bug count is linear in lines of code, so I'm going out on a limb and assuming that speed of hacking is similar. My own experience confirms this.)

Why is programmer speed important? It's really cheap and easy to upgrade your memory; it costs about two hundred bucks for a gig of good RAM, and takes maybe five minutes to install. It's relatively cheap and easy to upgrade your processor -- maybe seven hundred dollars for a nice Athlon 64 and a quality motherboard to go with it. It's really hard to upgrade your programmer. If you need raw development speed, you're better off with a language you can program quickly in than with one that takes maximal advantage of hardware you can readily upgrade anyway.

This isn't always the case; for some applications (games, for instance) code speed is critical. However, for most business tasks -- by which I mean the kind of one-off or in-house programs that a lot of us write -- nobody's going to notice a five-second performance difference, but everyone's going to care about a five-week delivery delta.
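The reranking described above is just a weighted sum over each language's totals. A minimal sketch with made-up numbers (the real shootout data isn't reproduced here), treating CPU time, memory, and lines of code as costs where lower is better:

```perl
use strict;
use warnings;

# Hypothetical per-language cost totals -- NOT the real shootout data.
my %cost = (
    gcc   => { cpu => 1.0, mem => 1.0, loc => 3.0 },
    ocaml => { cpu => 1.3, mem => 1.1, loc => 1.4 },
    perl  => { cpu => 8.0, mem => 2.5, loc => 1.0 },
);

# FoxtrotUniform's weights: LOC dominates, CPU counts a little, memory barely.
my %w = ( cpu => 0.1, mem => 0.01, loc => 1 );

my %total;
for my $lang ( keys %cost ) {
    $total{$lang} += $w{$_} * $cost{$lang}{$_} for keys %w;
}

# Lower weighted cost ranks higher.
for my $lang ( sort { $total{$a} <=> $total{$b} } keys %total ) {
    printf "%-6s %.3f\n", $lang, $total{$lang};
}
```

With these invented inputs the terse, slow language beats the fast, verbose one as soon as the LOC weight dominates, which is the whole point of the exercise: the ranking is a function of the weights, not of the languages alone.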
Re: The Great Computer Language Shootout
by ikegami (Patriarch) on Sep 21, 2004 at 16:34 UTC
Check the programs above Perl. They are either:

Given that -- without even counting the other benefits of Perl not illustrated -- I'm quite happy with the results.
Re: The Great Computer Language Shootout
by davido (Cardinal) on Sep 21, 2004 at 16:13 UTC
Can't place much stock in a website where, when I try to download the shootout results, the PHP script that is supposed to handle the download errors out:

People can devise tests that favor any agenda. Your most significant test should be your own productivity.

Dave
Re: The Great Computer Language Shootout
by herveus (Prior) on Sep 21, 2004 at 16:28 UTC
I just looked. Perl ranked about one third of the way down the list, right around Java, Python, etc., when CPU was the only consideration. Making lines of code and memory of equal weight, Perl ranked ninth or so... diddle the weights yourself and see what comes out. The books are there to be cooked. Don't worry. Be happy.

yours, Michael
Re: The Great Computer Language Shootout
by jdporter (Paladin) on Sep 21, 2004 at 21:14 UTC
Re: The Great Computer Language Shootout
by CountZero (Bishop) on Sep 21, 2004 at 19:52 UTC
Seriously, do they take into account the time to compile the program as well? Or the time to write and debug the code? If you include a module (or header file, or whatever...), does it count as 1 line or as many lines as that module contains?

CountZero

"If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
Re: The Great Computer Language Shootout
by TedPride (Priest) on Sep 21, 2004 at 20:26 UTC
by BUU (Prior) on Sep 21, 2004 at 21:54 UTC
Re: The Great Computer Language Shootout
by perlfan (Parson) on Sep 22, 2004 at 14:29 UTC
Re: The Great Computer Language Shootout
by Your Mother (Archbishop) on Sep 22, 2004 at 07:05 UTC
Others could poke better holes, but here's my stab, assuming I'm using Benchmark correctly, which I'm never sure about. Their random number generator in Perl versus Perl's vanilla rand(). Off by between 200% and 1,500%.

And if you turn it down to their default of a single run/generation:

(Update: I know they're doing "the same algorithm" for each; it just seems wrong to feed a cat dog food.)
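The benchmark code itself was attached as a download and isn't shown above. A minimal reconstruction of the comparison, assuming the shootout's usual linear congruential generator (the IM/IA/IC constants from its 'random' benchmark spec), would look something like this; the exact ratio will vary by machine and Perl build:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# The LCG specified by the shootout's 'random' benchmark.
use constant { IM => 139968, IA => 3877, IC => 29573 };

my $last = 42;
sub gen_random { ( $last = ( $last * IA + IC ) % IM ) / IM * $_[0] }

# Compare the hand-rolled generator against Perl's built-in rand().
cmpthese( 500_000, {
    shootout => sub { gen_random(100) },
    builtin  => sub { rand(100) },
} );
```

rand() is a single opcode while gen_random pays for a sub call plus several arithmetic ops per number, so a sizable gap in builtin's favor is expected; whether that justifies benchmarking the hand-rolled version is exactly the cat-food question in the update above.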
Re: The Great Computer Language Shootout
by Beechbone (Friar) on Sep 22, 2004 at 11:31 UTC
...but some of the Perl snippets are really bad. I took the time to speed up the 'hash2' by about 15%:
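Beechbone's actual patch was a download link and isn't shown here. As a sketch of the kind of change that helps, assuming the classic hash2 task (build a hash of foo_0..foo_9999, then repeatedly fold it into a second hash): iterating with each() avoids materializing a 10,000-element key list and doing a second lookup on every pass.

```perl
use strict;
use warnings;

# The hash2 task, per the classic shootout spec: hash1 maps
# foo_0..foo_9999 to 0..9999; add hash1 into hash2 $n times.
my $n = 10;
my %hash1 = map { ( "foo_$_" => $_ ) } 0 .. 9999;
my %hash2;

for ( 1 .. $n ) {
    # while/each walks the hash in place: no temporary list of
    # keys, and no second hash lookup per key.
    while ( my ( $k, $v ) = each %hash1 ) {
        $hash2{$k} += $v;
    }
}

# The shootout's required output line: two entries from each hash.
print "$hash1{foo_1} $hash1{foo_9999} $hash2{foo_1} $hash2{foo_9999}\n";
```

Whether this particular rewrite is the one Beechbone submitted is a guess; the general point stands that the posted Perl entries often left easy idiomatic wins like this on the table.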
Search, Ask, Know
Re: The Great Computer Language Shootout
by gmpassos (Priest) on Sep 23, 2004 at 01:59 UTC
Table generated at http://shootout.alioth.debian.org/craps.php?xcpu=1&xmem=1&xloc=1 (with a score multiplier of 1 for CPU, Mem and Lines):

Formatted with

Graciliano M. P.
Re: The Great Computer Language Shootout
by FoxtrotUniform (Prior) on Oct 10, 2004 at 20:23 UTC
On the same subject, here's an older article by Peter Norvig that might interest you.

by jplindstrom (Monsignor) on Oct 11, 2004 at 11:19 UTC
http://userpages.umbc.edu/%7Ebcorfm1/C++-vs-Lisp.html

/J
Re: The Great Computer Language Shootout
by QM (Parson) on Sep 23, 2004 at 18:59 UTC
Why not start your own site which adds criteria for Development Time, possibly broken down into Coding Time and Debugging Time? This could be weighted at something like 10% or 1% of the CPU weighting (insert arguments for and against here).

The various points about using native idioms to solve the same problem lead me to think that a Native Idiom score would be telling as well. Sure, by forcing all entries to avoid native-idiom advantages, the shootout shows how a generic, doesn't-fit-into-a-native-idiom algorithm performs. But it also avoids showcasing which programming areas each language excels at (performance-wise). Add in the native-idiom measure, and development time becomes apparent and interesting too.

-QM