Yeah, fair enough, I get your point. ;-)
To be a bit more specific, let me just outline the work done on performance. I'll split this work into two tracks.
Just about every week, someone from the core development team improves the speed of this or that specific feature (often 3 or 4 features per week, actually). Of course, such a change does not improve the speed of programs that don't use that specific feature. But, over time, many features are improved and it becomes increasingly likely that your specific program will benefit from one of these feature performance enhancements.
Then, there is also more in-depth work on the Rakudo compiler / MoarVM optimizer (including the JIT). This can significantly improve the performance of almost any program that runs long enough for the runtime optimizer to kick in. Needless to say, this kind of work is quite complicated and needs heavy testing. As far as I know, very good results have already been achieved, but they haven't yet found their way into packaged production releases. So, at this point, you would probably need to download the development sources and build Rakudo / MoarVM and the related environment to test these improvements. I don't have any specific information, but I hope these enhancements will make their way into a production release relatively soon (though I don't know when).
You can get more detailed information on these subjects here: https://perl6advent.wordpress.com/2017/12/16/.
it becomes increasingly likely that your specific program will benefit from one of these feature performance enhancements
We've all heard that a lot! Assuming a linear progression of improvement is unrealistic. If performance isn't better than Perl overall now -- with allegedly a better internal data model, a better VM, and a language that's easier to optimize -- where's that speed going to come from?
I'm sure a few people here remember the Parrot benchmarks of a decade ago that showed raw Parrot performance (PASM, PBC, and I believe PIR) was generally better than Perl performance, and that was without the sort of optimizations that could have been possible (escape analysis, unboxing, JIT).
Then, there is also more in-depth work on the Rakudo compiler / MoarVM optimizer (including the JIT).
The last time I looked at Moar, it didn't look like it was designed for the sort of optimizations that people think of when they think of JITs like in JavaScript, Lua, or the JVM. When we were designing the optimized version of Parrot called Lorito, we looked at Squeak/Slang and JavaScript for examples, focusing on optimization possibilities such as unboxing, using primitive types where possible, avoiding memory allocations where possible, and (above all) not crossing ABI/calling convention boundaries you can't optimize across.
I could be wrong about all this -- I haven't looked at any of this code in any sort of detail in seven years -- but as long as the optimization strategy of Moar/NQP/Rakudo is "write more stuff in C because C is fast", it'll struggle to reach performance parity with Perl, let alone surpass it. The fact that it's been years and the Rakudo stack is still four or five times slower than Perl does not give me much confidence that Rakudo will ever reach JavaScript levels of performance (let's be conservative and say it needs to be 20x faster for that) without yet another rewrite.