in reply to Re^2: evil code of the day: global stricture control
in thread evil code of the day: global stricture control

The problem is that the patch to load Carp lazily actually added more code to the core than if the author had left well enough alone. Carp itself is tiny.
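For context, the idiom being debated is deferring `require Carp` to the error path instead of loading the module up front. A minimal sketch of both styles (the sub names here are illustrative, not taken from the actual patch):

```perl
use strict;
use warnings;

# Eager style: Carp is compiled once at startup; the call site stays tiny.
use Carp ();
sub fail_eager { Carp::croak("bad input: @_") }

# Lazy style: Carp is loaded only if the error path ever fires, at the
# cost of extra code that must itself be compiled and kept in memory.
sub fail_lazy {
    require Carp;
    Carp::croak("bad input: @_");
}
```

The complaint above is that the lazy version's bookkeeping can outweigh what it saves, since Carp itself is small.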


Re^4: evil code of the day: global stricture control
by vkon (Curate) on Aug 02, 2007 at 20:53 UTC
    ... but your version will always read Carp.pm from disk, even when it's not needed.

    We can discuss whether this optimization is premature or not, but sometimes avoiding Carp can be justified... somehow.

    Another example: when using DynaLoader, older Perls checked $^O many times, as if it could change during the run.

    There are Yin and Yang here: some people stick unneeded modules everywhere for the sake of an imaginary improvement in code readability, while others send optimization patches, because otherwise using an innocent module results in some tens of unneeded .pm files being loaded.
    It seems to me that if Ilya Zakharevich had never sent his optimization patches, Perl would be as slow as Java...

      We can discuss whether this optimization is premature or not...

      No, we can't. I never saw a profile, either before or after. All I saw was a silly knee-jerk argument that "OMG CARP IS TEH SLOOOOOOOW!" and a bunch of patches that added code that perl has to read from disk, compile, and keep around in memory. There's absolutely no debate possible about whether this was a premature optimization.

      Any "optimization" without measurement is premature.
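      That point can be made concrete with the core Benchmark module: measure first, then argue. A sketch (the two candidate snippets are made up purely for illustration):

      ```perl
      use strict;
      use warnings;
      use Benchmark qw(timethese cmpthese);

      # Measure before optimizing: time two ways of building the same string.
      my $results = timethese(10_000, {
          concat => sub { my $s = ''; $s .= 'x' for 1 .. 100; $s },
          join   => sub { my $s = join '', ('x') x 100; $s },
      }, 'none');   # 'none' suppresses per-test output; cmpthese prints the chart

      cmpthese($results);
      ```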

      It seems to me that if Ilya Zakharevich had never sent his optimization patches, Perl would be as slow as Java...

      I don't know what that has to do with this subject, but I've achieved several very measurable optimizations by removing code and almost none by adding code.

        No, we can't.

        that's my English... I meant "whether we discuss it or not, no consensus will be reached"

        As for optimizations by removing code - I do not agree.

        Usually a naive, shorter implementation will be slower than an "advanced" algorithm; for example, replacing bubble sort with a balanced tree gains performance at the price of a more complex algorithm.
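        As a quick illustration of that trade-off, a bubble sort is about as short as sorting code gets, yet Perl's built-in sort (a more sophisticated O(n log n) algorithm internally) beats it badly on any real input:

        ```perl
        use strict;
        use warnings;

        # Naive and short: O(n^2) bubble sort.
        sub bubble_sort {
            my @a = @_;
            for my $i (0 .. $#a) {
                for my $j (0 .. $#a - $i - 1) {
                    @a[$j, $j + 1] = @a[$j + 1, $j] if $a[$j] > $a[$j + 1];
                }
            }
            return @a;
        }

        my @data = map { int rand 1000 } 1 .. 500;

        # The "advanced" alternative: Perl's built-in O(n log n) sort.
        my @slow = bubble_sort(@data);
        my @fast = sort { $a <=> $b } @data;
        ```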

        And here is a typical example of the other type of optimization: making the code larger.

        In the code excerpt below:

        for (;length;) {
            read (FIN, $_, 1_000_000);
          m0:
            $_ .= <FIN> unless eof FIN;
            if (/\\[\r\n]$/) {
                goto m0 unless eof FIN;
            }
            s#(/Title\s*)\(((?:[^\\\)]|\\.)+)\)#$1.repair_title($2)#eg;
            print FOUT;
            $len += length;
            print STDERR "$len\n";
        }
        I was forced to switch to processing in chunks of 1_000_000 bytes because, when it was a simple s/.../.../g over the whole input, quite often 1 GB of memory was eaten and the result was still not reached.
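        The same chunking idea can be sketched with lexical filehandles: read fixed-size blocks, then top each block up to the next newline so a pattern never straddles a chunk boundary. This sketch omits the backslash-continuation handling in the code above, and the `$fix` callback merely stands in for the `repair_title` substitution:

        ```perl
        use strict;
        use warnings;

        # Apply a per-chunk fix-up to a large stream without slurping it whole.
        sub filter_in_chunks {
            my ($in, $out, $fix) = @_;
            local $/ = \1_000_000;               # read 1 MB records, not lines
            while (defined(my $chunk = <$in>)) {
                unless (eof $in) {
                    local $/ = "\n";             # switch back to line mode...
                    $chunk .= <$in> // '';       # ...to finish the current line
                }
                print {$out} $fix->($chunk);
            }
        }
        ```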