http://qs1969.pair.com?node_id=378387


in reply to code execution speed and variable name length

I haven't benchmarked this, so you should take this with a grain of salt. But, I would be extremely surprised if you found more than a 1% difference in execution time for any non-trivial application.
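In the spirit of the signature below, here is an untested sketch of how one could check this with the core Benchmark module: the same loop with a one-character variable name versus a deliberately long one. (The sub names and the long variable name are made up for illustration.) Since Perl resolves lexical names at compile time, any runtime difference should be lost in the noise.

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Identical work, differing only in variable name length.
sub short_name {
    my $x = 0;
    $x += $_ for 1 .. 1_000;
    return $x;
}

sub long_name {
    my $accumulated_running_total_of_values = 0;
    $accumulated_running_total_of_values += $_ for 1 .. 1_000;
    return $accumulated_running_total_of_values;
}

# Run each sub for at least one CPU second and print a comparison table.
cmpthese( -1, { short => \&short_name, long => \&long_name } );
```

Run it a few times; the rankings will likely flip between runs, which is itself a hint that the difference is below Benchmark's noise floor.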

------
We are the carpenters and bricklayers of the Information Age.

Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose

I shouldn't have to say this, but any code, unless otherwise stated, is untested

Replies are listed 'Best First'.
Re^2: code execution speed and variable name length
by diotalevi (Canon) on Jul 29, 2004 at 15:05 UTC
    1% is not detectable with standard tools like Benchmark. How big would the difference have to be before it could be considered detectable?
      I would say that anything under 5% is noise and is almost always safely ignorable. (Obviously, 1% for Amazon, eBay, or Google isn't ignorable, but I don't think any of us have that kind of demand.) Of course, this is a rule of thumb, with the appropriate caveats attached.

      Furthermore, I don't think benchmarking variable name length is worthwhile, for the simple reason that Perl itself is a poor choice for execution speed. If you have optimized your Perl code to the point that the only thing you can think to do is reduce the size of your variable names and you still need to optimize further, then you need to look at rewriting some of your Perl in XS and/or C. If that doesn't help, rewrite the rest of your Perl in C and optimize some of your C to ASM. If that doesn't work, you need bigger servers.

      In other words, micro-optimizations like that are cases of premature optimization and should be avoided. But, then again, you already knew that. :-)


        I was just thinking along a tangent. The hashing speed of differing-size strings is not something I'm interested in at all. Someone else brought up Benchmark, and that raised the question of how much difference would have to be observed before it could be considered significant. That's all I was thinking.

        Total aside, but I must.

        (Obviously, 1% for Amazon, EBay, or Google isn't ignorable, but I don't think any of us have that kind of demand.)

        You are my kind of optimist! The bureaucracy that goes with a large, successful company means that 1% losses often accumulate, loop after loop, into rather large losses that no one can address without cooperation; thus, no one addresses them. I only wish I were speaking hypothetically about one of the three named.