in reply to Re^3: Perl 5's greatest limitation is...?
in thread Perl 5's greatest limitation is...?

Dismissing "Perl is slow" with "you didn't really try that hard" doesn't convince me.
I thought the Perl motto was "making easy things easy and hard things possible", not "bang your head against a wall for a long time, since theoretically you might be able to make it fast enough". Sure, you could write your own customized Perl compiler and you might be able to squeeze out a little more performance, but it's a matter of economics. Sometimes you bite the bullet and use a language where it is easier to get close to the maximum performance your hardware allows. Maybe you should try optimizing some of the Perl programs over at the shootout. I'd really like to see a good Perl implementation of the raytrace benchmark.

Re^5: Perl 5's greatest limitation is...?
by themage (Friar) on Jul 31, 2005 at 22:05 UTC
    Hi,

    I have a small question... For those benchmarks, do you need to implement the same algorithm, or can you change the algorithm as long as the results are the same?

    Sure, it is an idiotic question; different algorithms are not benchmarkable against each other.

    But, for example, with the Ackermann benchmark, a little analysis turns up a small change, based on a special case of that algorithm, that is very, very much more efficient than the presented version:

    In Ack($m,$n), with $m == 1 or $m == 2, the result can be calculated directly as $m*$n + $m + 1 (and Ack(0,$n) is simply $n + 1).

    In this case the result would be the same, but even though you still need the recursive algorithm for $m >= 3, the number of recursive calls needed to find the result shrinks to a tiny fraction. A sketch of what I mean is below.
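    Here is a minimal sketch of that shortcut (the ack() sub and the ack(3, 8) test value are just illustrative; the closed form only holds for $m of 1 or 2, so Ack(0,$n) is handled separately as $n + 1):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # "Same result" Ackermann: closed forms for the small-$m cases,
        # plain recursion for everything else.
        sub ack {
            my ($m, $n) = @_;
            return $n + 1           if $m == 0;               # Ack(0,$n) = $n + 1
            return $m * $n + $m + 1 if $m == 1 || $m == 2;    # $n + 2 and 2*$n + 3
            return ack($m - 1, 1)   if $n == 0;
            return ack($m - 1, ack($m, $n - 1));
        }

        print ack(3, 8), "\n";   # 2045 -- same answer, far fewer recursive calls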

      See the FAQ...
      We are trying to show the performance of various programming language implementations - so we ask that contributed programs not only give the correct result, but also use the same algorithm to calculate that result.

      Doug Bagley used both same way (same algorithm) and same thing (same result) benchmarks - so in many cases the performance differences were simply better algorithms.

      After hearing many arguments, it seems to me that we should think of same way (same algorithm) tests as benchmarks, and we should think of same thing (same result) tests as contests.

      At present, we are only trying to show benchmarks.