I won't say anything about premature optimization because you've been lectured enough about it :)
Since you brought it up :), and as a proxy for everyone else who did so, I'll respond to the lecturing here.
There is a strange but verifiable fact about the process of optimisation: the earlier* you do it, the easier, more effective and less troublesome it becomes.
*Here the word "earlier" is deliberately chosen to contrast with the word "premature". You can look up the various definitions of "premature", but let's derive our own: "premature" literally means "before maturity", so a premature optimisation is one made before the code it applies to has matured.
Now here's the thing. You develop your latest greatest XYZ application. You follow good, sound, SE/CS design and development principles. You eschew optimisations in favour of simple, clear maintainability. You test, document and verify. Your application is mature--it just runs too slowly.
So now it is time to benchmark and optimise. But to do so, except in the simplest of applications, you have to dissect and re-write large parts of your application, re-write large parts of your test suite, and re-test and re-verify the whole darn thing.
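As an aside on the benchmarking step: Perl's core Benchmark module makes the comparison part cheap. Here is a minimal sketch--the two competing implementations are hypothetical stand-ins for whatever your profiling flags as hot:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @fields = ('x') x 100;

    # Compare two candidate implementations of the same hot spot.
    # A count of -2 means: run each for at least 2 CPU seconds.
    cmpthese( -2, {
        concat => sub {
            my $line = '';
            $line .= "$_," for @fields;
            chop $line;              # drop the trailing comma
            return $line;
        },
        join   => sub { return join ',', @fields },
    });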
Retro-fitting optimisations, whether algorithmic or implementational, to an existing complex application is always considerably harder than writing the code efficiently as you go.
The problem with the usual interpretation of the "premature optimisation" axiom is that all too often people fail to see that coding is hierarchical.
With the exception of the simplest of applications, we code in layers. The lower down the hierarchy a layer sits, the more effect its efficiency (or lack of it) has upon the overall efficiency of our application. And once upper layers depend upon a lower layer, improving that layer's efficiency becomes very hard to do without breaking them.
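To make the dependency problem concrete, here is a minimal (hypothetical) sketch of a lower-layer accessor whose interface, once relied upon, pins down its implementation:

    package Stats;
    use strict;
    use warnings;

    sub new { bless { samples => [] }, shift }

    # Lower-layer accessor: returns a *copy* of the sample list.
    # Every upper layer that writes  my @s = $obj->samples;
    # now depends on this list-returning interface.
    sub samples { my $self = shift; return @{ $self->{samples} } }

    # The obvious later optimisation--return the array reference and
    # avoid the copy--changes the interface and breaks every caller:
    #   sub samples { my $self = shift; return $self->{samples} }

    1;

Had the reference-returning form been chosen at creation time, the upper layers would have been written against the efficient interface from the start.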
It has to be seen that optimising library code, inner classes and similar lower layers at the time of their creation--is not premature! It may be before the maturity of the (first) application that uses them, but optimising a library or class prior to its inclusion in an application is an integral part of its maturation.
If we are serious about code reuse, we must realise that when we code classes and modules we intend to be reusable, it is incumbent upon us to make them as efficient as we can (within the bounds of correctness, usability and reasonable maintenance), regardless of whether that efficiency is required by the first application for which they are designed.
Only in this way can we ensure that those classes, modules and libraries will be efficient enough for the next application that would like to use them. And the one after that.
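For instance (a hypothetical registry module, sketched only to illustrate the point): maintaining a hash index alongside the list costs a few lines now, but turns membership tests from O(n) scans into O(1) lookups for every application that reuses the module later:

    package NameRegistry;
    use strict;
    use warnings;

    sub new { bless { names => [], index => {} }, shift }

    sub add {
        my ($self, $name) = @_;
        push @{ $self->{names} }, $name;
        $self->{index}{$name} = 1;      # cheap to maintain now...
    }

    # ...O(1) forever after, instead of grep-ing @{ $self->{names} }.
    sub has {
        my ($self, $name) = @_;
        return exists $self->{index}{$name};
    }

    1;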
Leaving (reasonable, achievable) optimisations as an afterthought, as a last-minute "we'll do it if we need to" step in the overall development plan, can lead to huge costs and overruns.
Imagine trying to gain, retroactively and on a case-by-case basis at the application level, the kind of benefits we all derive from the internal optimisations that Perl performs for us.
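One small, verifiable instance of those internal optimisations is compile-time constant folding, which B::Deparse makes visible (output from a typical perl 5 build; details may vary by version):

    $ perl -MO=Deparse -e 'print 2**10 * 3;'
    print 3072;
    -e syntax OK

No application-level code has to be written, tested or maintained to get that; it simply falls out of the interpreter.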
Alternatively, if Perl were able to save the byte-code representation of its code, and relocatably reload it, (very effective, mature and clever) hacks like mod_perl would not be necessary+.
+For their performance benefits, that is. In deference to merlyn's advice about mod_perl's other features.
In reply to Re^2: Optimisation (global versus singular) by BrowserUk
in thread Optimisation isn't a dirty word. by BrowserUk