PerlMonks  

The Rules of Optimization Club

by petdance (Parson)
on Mar 30, 2012 at 15:03 UTC

  1. The first rule of Optimization Club is, you do not Optimize.
  2. The second rule of Optimization Club is, you do not Optimize without measuring.
  3. If your app is running faster than the underlying transport protocol, the optimization is over.
  4. One factor at a time.
  5. No marketroids, no marketroid schedules.
  6. Testing will go on as long as it has to.
  7. If this is your first night at Optimization Club, you have to write a test case.
(Adapted from separate lists by David Fetter and Michael Schwern.)

When you are thinking about rule #2, you reach for Devel::NYTProf.

Rule #7 demands an addition to your test suite, which means delving into Test::More or, if you prefer, Test::Most.

One rule that doesn't fit in the rules format: If your code doesn't do what you want it to do, you have no business optimizing it.
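For rule #7, a minimal test case with Test::More might look like the following sketch (the routine name and values are hypothetical, standing in for whatever you intend to optimize):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical routine we might later want to make faster.
sub total_price {
    my @prices = @_;
    my $sum = 0;
    $sum += $_ for @prices;
    return $sum;
}

# Rule #7: pin down the current behaviour before touching anything.
is( total_price(),        0, 'empty list totals zero' );
is( total_price(1, 2, 3), 6, 'simple sum' );
```

For rule #2, profile with `perl -d:NYTProf script.pl` and then run `nytprofhtml` to see where the time actually goes, both before and after each change.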

xoxo,
Andy

Replies are listed 'Best First'.
Re: The Rules of Optimization Club
by BrowserUk (Patriarch) on Mar 30, 2012 at 21:14 UTC

    Dear Mr petdance,

    We are sorry to inform you that your application to join Optimization Club has been declined.

    In accordance with the spirit of openness enshrined in our charter, the following is a list of reasons for which membership may be declined. All or some may be applicable to your application. No further correspondence with regard to the reasoning will be considered.

    • The range and/or depth of your fields of practical experience is too narrow.
    • Your answers to our questionnaire show signs of plagiarism.
    • Your answers to our questionnaire show a lack of understanding of our charter's primary aims.
    • You may have failed to realise that the opposite of 'optimised' is 'inefficient'.

      Inefficiency means resources are being wasted. And wasted resources are wasted money.

    • You may not have realised that being IO-bound is not an excuse for not tuning the CPU costs of your processing.

      Even if those wasted cycles cannot be readily utilised by other processes within the system, they do at the very least consume power unnecessarily, generating excess heat, which in turn requires more power to dissipate.

      And energy costs are becoming a major proportion of IT costs.

    • You may have subscribed to the view that programmer's time is more valuable than processor time.

      This viewpoint completely neglects that the programmer's time is paid for once, but the time users and customers spend waiting for code to complete is repeated over and over. And they pay the bills.

      Favouring programmer time over customer time is an intellectually, morally and financially bankrupt ideology.

    • You may have shown a tendency to assume that your priorities have priority over those of the business you are writing code for.

      Whilst testing is important, suggesting that it takes priority over business needs and market deadlines is the tail wagging the dog.

      There is no reason for the development role without a business justification for its product.

      There is no point in delivering the perfectly tested product, if by the time it reaches the marketplace, the demand has gone elsewhere.

      Nor will a bloated or slow product sustain itself in the marketplace in the face of better optimised competition.

    • You may have misunderstood the role of testing.

      Testing is a means to an end, not an end in its own right. Testing is a cost centre, not a profit centre.

      When the process of testing incurs costs greater than the aggregate costs of the in-the-field failures it might prevent, it becomes an irrecoverable drain on resources.

    • You may have demonstrated a narrow field of view with respect to development methodology and/or testing tools.

      Mandated monocultures preclude deriving benefits from alternative approaches and mechanisms.

    For and on behalf of The Optimization Club.

    Mostly humour. But as with all the best humour, there is a strong foundation of reality.

      You're inferring a lot that I didn't say or imply.

      You want to optimize for energy consumption and heat output? Go ahead. Just measure before and after so you know that it was worth your time.

      While I didn't say anything about programmer vs. processor time, the rules about measuring apply there as well. It depends on whether you're building a compiler or a one-time data conversion script. And how much does your programmer cost? Again, measure.

      Measure measure measure so you know for sure.

      xoxo,
      Andy

        one-time data conversion script.
        I would not write tests for a one-time data conversion script. There's only one test for a one-time script: its run.

      This viewpoint completely neglects that the programmer's time is paid for once; but the time users and customers spend waiting for code to complete is repeated over and over. And they pay the bills.

      That very much depends on the task at hand. Consider this: on a database there are 10,000 or so payment schedules recorded for £3 every 4 months, but it turns out this was a systematic data import error, and it should be £4 every 3 months.

      Given a choice between:

      • Programmer spends 2 hours writing a script that fixes the problem at a rate of one payment schedule per second; and
      • Programmer spends 4 hours writing a script that fixes the problem at a rate of five payment schedules per second.

      Then chances are that the first solution would be preferred. OK, so the script is going to take close to three hours to run, but it's a one-off fix, and nobody has to stand over it while it's running.

      There are cases where the developer's time is more precious than the performance of the program. And there are cases where the performance of the program is everything. Most, of course, are somewhere in between.
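      The back-of-envelope arithmetic for that trade-off is easy to check (numbers taken from the example above):

```perl
use strict;
use warnings;

my $schedules = 10_000;

# Option A: 2 programmer-hours, fix rate of 1 schedule per second.
# Option B: 4 programmer-hours, fix rate of 5 schedules per second.
my $run_a = $schedules / 1;   # 10_000 s -- close to three hours of runtime
my $run_b = $schedules / 5;   #  2_000 s -- roughly half an hour of runtime

printf "A: 2 programmer-hours + %.2f machine-hours unattended\n", $run_a / 3600;
printf "B: 4 programmer-hours + %.2f machine-hours unattended\n", $run_b / 3600;
```

      The total wall-clock time is similar either way; the real difference is two extra programmer-hours versus two extra unattended machine-hours, which is why the cheaper script usually wins.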

      perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
        but it's a one-off fix, and nobody has to stand over it while it's running.

        Exactly! One off and no user waiting. Unimportant, and beyond the scope of the remit.

        Contrast with millions of people every hour of every day, waiting a couple of extra seconds for their ging/gang/boogle/yoohoo/facespace/mybook/tweedle/amabay/tescutters interactions.

        Excess cycles consumed by one customer are lost to others waiting. IO-bound doesn't mean either non-urgent or non-critical.

        Even for far smaller scale businesses, the loss of individual customers to impatience with sluggish backends and overindulgent, pretty frontends can be critical to your bottom line. As the world gets smaller and the choices of places to shop get ever wider, efficiency is critical to first establishing and then keeping a customer base.

        Web pages that fail to respond within 10 seconds, or that aren't ready to accept input within 3, just don't get a chance to sell me anything, much less advertise to me -- by then I've moved on to the next hit on the search engine list.

        Leaving optimisation as an afterthought, rather than building it in and measuring it as you design and develop your code, is like building a shop with narrow, difficult-to-open doors and not switching the lights on.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        The start of some sanity?

Re: The Rules of Optimization Club
by eyepopslikeamosquito (Archbishop) on Mar 30, 2012 at 23:33 UTC

    While code simplicity and clarity normally trump efficiency, efficient idioms should routinely be preferred to inefficient ones whenever code maintainability does not suffer. Knowing and understanding the efficiency of basic language constructs is part of mastering a programming language.

    Alexandrescu and Sutter (C++), for example, in Guideline 9, "Don't pessimize prematurely", advise you to prefer efficient idioms to less efficient ones whenever the choice is arbitrary. This is not premature optimization; rather, it is the avoidance of gratuitous pessimization. Sometimes, as with i++ versus ++i, or passing objects by value versus const reference, the more efficient alternative is no harder to read and maintain. Even better, with some idioms, such as declaring objects at the point of first use, the more efficient choice also improves code maintainability.

      Knowing and understanding the efficiency of basic language constructs is part of mastering a programming language.
      This. The "methodology" of writing whatever arbitrarily-inefficient code you first think of that produces the desired output, then doing a bunch of profiling followed by tweaking and/or extensive rewrites, is a blight. If you write something both readable and reasonably efficient in the first place, there's a good chance you won't have to mess with it later. Having a clue about hardware and data structures really pays.

      This, by the way, is why I've always been annoyed by the "ArrayList" in Java (don't get me started on the "HashArrayListWhatever" used in Lua): depending on your expected access patterns, you'll want either an array or a list. In some cases, you'll want something fancier or more specialized. Arrays give you fast random access and slower insertions; doubly-linked lists give you fast insertion and slower random access; singly-linked lists give you fast insertion at one end with minimal space overhead; balanced trees scale well for most operations, but have a significant constant penalty. If you start out with the right data structure, you may not have to bother with laborious profile-guided optimization.
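      The same trade-offs are easy to demonstrate in Perl: a plain array gives amortized O(1) appends, but inserting into the middle is O(n) because the whole tail must be shifted. A minimal sketch with the core Benchmark module (element count chosen arbitrarily):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $n = 2_000;

cmpthese( -1, {
    # Appending: amortized O(1) per element.
    push_end   => sub { my @a; push @a, $_ for 1 .. $n; },
    # Mid-array insertion: O(n) per element -- every tail element moves.
    splice_mid => sub { my @a; splice( @a, int( @a / 2 ), 0, $_ ) for 1 .. $n; },
});
```

      If your access pattern is dominated by mid-sequence insertion, the numbers this prints are an argument for reaching for a different structure before reaching for the profiler.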

Re: The Rules of Optimization Club
by tinita (Parson) on Mar 31, 2012 at 12:01 UTC

    I hope you realize how ironic this sounds in this forum. My typical usage here is: click on a thread, then switch tabs to a different forum/rss feed/whatever, read 2 or 3 articles, then switch back to perlmonks, where the thread has finished loading ;-)

    First: I think everyone would agree that optimizing without having a clue is a waste of time. Playing around with Benchmark and Devel::NYTProf can be really enlightening.
    Even if you don't have a clue about algorithms and complexity, you can still write programs that do what you want, that are well written in terms of maintainability, and that are "fast enough".

    But: if nobody ever cared about writing highly optimized Perl modules, nobody would want to write applications and websites with it.

    I'm actually sick of people warning about premature optimization over and over. In recent weeks I have been optimizing my forum software, which has been running live for over three years. I did many different optimizations, and by logging the request times and viewing them in munin graphs I could clearly see which kinds of optimization were more successful and which less so.
    I estimate that some optimizations will only pay off once the website has more traffic. But that's no reason not to do them. It's software that I want to call "scalable".
    I was talking about some of my thoughts in an IRC channel, and one person said to me that I was doing premature optimization. wtf?

    So, yes: measure before or during optimization. If you are experienced, though, often you don't need to measure beforehand.
    But stop warning people about optimization in general.

      I see far too many people trying to improve their execution speed when they have no business doing so.

      The node that prompted my posting the Rules of Optimization Club was this one (Print Vs Return In a Subroutine) where the writer wanted to know if it was faster to use print or return. If he wants real speed, he can call exit, too.

      The PHP world especially seems to be filled with people who worry about whether it's faster to use interpolation or concatenation to build strings, while tweaking a program that makes SQL queries against unindexed tables and returns the results over the Internet.
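      That particular question takes seconds to settle with the core Benchmark module, and whatever the result, the difference is dwarfed by a single unindexed query or network round trip (the variables here are illustrative):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my ( $host, $port ) = ( 'example.com', 8080 );

# Two ways of building the same string; both produce identical output.
cmpthese( -1, {
    interpolate => sub { my $url = "http://$host:$port/"; },
    concatenate => sub { my $url = 'http://' . $host . ':' . $port . '/'; },
});
```

      Run it and you have an answer in seconds, measured on your own perl, instead of folklore.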

      It is for these people that the Rules of Optimization Club are most intended.

      xoxo,
      Andy

        I agree with that. It's just that I read people quoting the famous "premature optimization" line way too often, and most of the time they forget to mention (or probably even to read) the rest of the quote.
        So I got really allergic to any "don't optimize" statements =)
      but stop warning people about optimization in general.

      Hal-ley-loo-yaah!

      My first reaction to accusations of "premature optimisation" is that the accuser is: a) aware of the limitations of their knowledge of the required techniques; b) aware of the techniques, but too lazy to pursue them; c) a marketeer who'd rather sell you his latest gizmo or gadget -- be it software or hardware -- than show you how to make more effective use of the one you bought from him last year.

      Permute from the three.

      S'not always the case, but more often than not.



Re: The Rules of Optimization Club
by JavaFan (Canon) on Mar 30, 2012 at 15:26 UTC
    If your code doesn't do what you want it to do, you have no business optimizing it.

    Ah, you haven't been out of junior class yet.

    If I lose more sales because of performance issues than I lose because of bugs or missing (edge) cases, performance *is* more important than making it do what I want. Or, if the boost in sales from cutting corners is more than the loss in sales because of the corners being cut, I have all the business case I need to optimize it.

    To give an example: if I use software that calculates an "optimal" route for me to deliver packages to 40 customers, I'd rather have a slightly optimal route that's ready by the time I start driving than have to wait another few years just to find out I could save a few seconds from my trip.

    Seasoned programmers know when a solution is "good enough" and performance becomes as important as (or even more important than) being "perfect".

      If your code doesn't do what you want it to do, you have no business optimizing it.
      If I lose more sales because of performance issues than I lose because of bugs, or missing (edge) cases,

      No, because "does what I want it to" is not the same as "runs perfectly with every conceivable input". I believe the OP chose those words precisely to be vague enough to allow that "good enough" is good enough.

      I reckon we are the only monastery ever to have a dungeon stuffed with 16,000 zombies.
      Ah, you haven't been out of junior class yet.

      I wonder what makes you think that that's a good thing to say, even if it were accurate.

      xoxo,
      Andy

        Ah, you haven't been out of junior class yet.
        I wonder what makes you think that that's a good thing to say, even if it were accurate.

        He seems to believe it is a "good thing to say" because he's been rewarded handsomely (XP-wise) for trollish behavior on PM.

        Other monks appear to think that it is "funny" or "clever".

      Great point, JavaFan (as always...). Of course, if performance is a higher requirement than functionality, then is the function really broken if it meets performance but is suboptimal in its functional scope (a.k.a. fast but buggy)?

      I think the point he's trying to make (and I could be wrong) is that if you optimize a solution prior to getting it fully scoped, you could encounter elements of scope that destroy any work you did in optimizing performance; essentially, you're spinning your wheels. Likewise, you can introduce unnecessary architectural complexity if you optimize before finishing any functional requirements.

      Of course you should never dismiss performance when designing a system...

        if performance is a higher requirement than functionality
        It's neither a "higher" requirement nor a "lower" requirement. Performance is always a requirement. In the end, someone or something is waiting for the results, and they aren't going to wait forever. Now, where the balance lies between performance and functionality is decided on a case-by-case basis. I don't think the blanket statements made by the OP reflect any real-world scenarios (which surprises me, as petdance usually comes across as someone with both feet in the business world).
        Likewise you can introduce unnecessary architectural complexity if you optimize before finishing any functional requirements.
        Or the other way around. ;-) I guess most of us have encountered systems that were beautifully designed and had implemented every little aspect that was dreamed up beforehand, but no one uses them because performance was factored in too late.
      If your code doesn't do what you want it to do, you have no business optimizing it.
      To give an example: if I use software that calculates an "optimal" route for me to deliver packages to 40 customers, I'd rather have a slightly optimal route that's ready by the time I start driving than have to wait another few years just to find out I could save a few seconds from my trip.

      You've just given a perfect example of why the OP was correct, not a counterargument. In the above example, "what you want the code to do" is to give you a "slightly optimal route" in a reasonable amount of time. Your trade-off is that there might be more optimal routes if you ran the code for longer.

      If you're prioritising performance over correctness, then you're seeking an answer that is "good enough" given your time-frame. If your code is doing that, then your code is doing what you want it to. Note that "good enough" is important here. If your code was giving you obviously wrong answers, blindingly fast, you wouldn't be happy. The answers have to at least look correct enough. Ideally you'd have a test suite that flagged your edge cases and bugs, so that when you had time to deal with them you could, even while you made the common cases run well enough and fast enough to keep your business ticking over.

      In different fields people would prefer correctness over performance. For example, I'd rather receive my credit card statement a day or two later than usual than have all the numbers added up wrongly. (This is especially true here, where Australia Post, the underlying transport, is slower than the generation and printing of my statement.)

Re: The Rules of Optimization Club
by pemungkah (Priest) on Apr 04, 2012 at 18:17 UTC
    How about altering rule 1 to "You do not optimize if you don't need to"?

    This is getting back to the YAGNI argument: if you don't need to implement it now, wait until you do.
