in reply to Re^4: evil code of the day: global stricture control
in thread evil code of the day: global stricture control

We can discuss whether this optimization is premature or not...

No, we can't. I never saw a profile, either before or after. All I saw was a silly knee-jerk argument that "OMG CARP IS TEH SLOOOOOOOW!" and a bunch of patches that added code that perl has to read from disk, compile, and keep around in memory. There's absolutely no debate possible about whether this was a premature optimization.

Any "optimization" without measurement is premature.

It seems to me that if Ilya Zakharevich had never sent his optimization patches, then Perl could be as slow as Java...

I don't know what that has to do with this subject, but I've achieved several very measurable optimizations by removing code and almost none by adding code.

Re^6: evil code of the day: global stricture control
by vkon (Curate) on Aug 03, 2007 at 07:57 UTC
    No, we can't.

    that's my English... I meant "whether we discuss it or not, no consensus will be reached".

    As for optimizations by removing code, I do not agree.

    Usually a naive, shorter implementation will be slower than an "advanced" algorithm: replacing a bubble sort with a balanced tree gains performance at the price of a more complex (and longer) algorithm.
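
    For instance, a minimal sketch (my example, not benchmarked here): testing membership with a nested grep is short but O(n) per query, while the longer version that builds a lookup hash first answers each query in O(1):

        use strict;
        use warnings;

        my @items   = (1 .. 10_000);
        my @queries = (5_000 .. 6_000);

        # Short and naive: rescans the whole array for every query -- O(n) each.
        my $hits_naive = grep {
            my $q = $_;
            grep { $_ == $q } @items;
        } @queries;

        # Longer "advanced" version: build a lookup hash once, then O(1) per query.
        my %seen;
        @seen{@items} = ();
        my $hits_fast = grep { exists $seen{$_} } @queries;

        print "$hits_naive $hits_fast\n";    # same answer, very different cost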

    And here is a typical example of another type of optimization that makes the code larger, in the code excerpt below:

        for (;length;) {                      # loop while the last chunk was non-empty
            read(FIN, $_, 1_000_000);         # grab the next chunk of up to 1 MB
            m0: $_ .= <FIN> unless eof FIN;   # finish the line the chunk cut in two
            if (/\\[\r\n]$/) {                # ends in a backslash-escaped line break:
                goto m0 unless eof FIN;       #   keep appending lines until it doesn't
            }
            s#(/Title\s*)\(((?:[^\\\)]|\\.)+)\)#$1 . repair_title($2)#eg;
            print FOUT;                       # write the repaired chunk
            $len += length;
            print STDERR "$len\n";            # progress: total bytes processed so far
        }
    I was forced to switch to processing in chunks of 1_000_000 bytes because, when it was a simple s/.../.../g over the whole file, it would quite often eat 1GB of memory and still never reach a result.
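
    Just as a possible alternative shape for the same chunking (a sketch, not the code I actually ran; FIN and FOUT are assumed open and repair_title defined as above): assigning a reference to an integer to $/ makes readline return fixed-size records, which turns the goto into a plain while:

        use strict;
        use warnings;

        local $/ = \1_000_000;                # readline now returns 1 MB records
        my $len = 0;
        while (defined(my $chunk = <FIN>)) {
            {
                local $/ = "\n";                  # back to line mode briefly
                $chunk .= <FIN> unless eof FIN;   # finish the line the record cut
                # keep appending lines while the chunk still ends in a
                # backslash-escaped line break
                $chunk .= <FIN> while !eof(FIN) && $chunk =~ /\\[\r\n]$/;
            }
            $chunk =~ s#(/Title\s*)\(((?:[^\\\)]|\\.)+)\)#$1 . repair_title($2)#eg;
            print FOUT $chunk;
            $len += length $chunk;
            print STDERR "$len\n";
        }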