
Re: On Quality

by adrianh (Chancellor)
on May 10, 2005 at 10:24 UTC


in reply to On Quality

It got me wondering if there were any concise, potentially catchy phrases that can prove useful in aligning everyone's thoughts around the core ideas of producing quality output.

None of them original to me, but I agree with all of them:

  • You Aren't Gonna Need It - Don't write code because you think you'll need it. Write it when you actually need it. Keeps the code smaller, and so hopefully simpler.
  • Do The Simplest Thing That Could Possibly Work - if you're ever in doubt as to what to code - code up the simplest possible thing that will do the job. It's easy to make a simple system complicated. It's hard to make a complicated system simple. Get something simple down and incrementally add code and refactor.
  • Don't Repeat Yourself (or alternatively Once And Only Once) - duplication is the root of many, if not most, maintenance nightmares, so don't do it. Which leads us nicely to...
  • Refactor Mercilessly - refactor all of the time. Any time you see duplication, lack of clarity, etc. fix it. Fix it straight away.
  • Test First - write your tests before your code. That way you know when your code works.
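
To make Test First concrete, here's a minimal sketch in Perl using Test::More. The Basket class and its methods are invented for illustration; in practice the test script would live apart from the module it drives, and would be written first.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # In test-first style this test script is written *before* the
    # Basket module exists; the inline package below stands in for it.
    {
        package Basket;
        sub new      { return bless { items => [] }, shift }
        sub add_item { my ($self, $name, $pence) = @_;
                       push @{ $self->{items} }, [ $name, $pence ]; }
        sub total    { my $t = 0; $t += $_->[1] for @{ shift->{items} };
                       return $t; }
    }

    my $basket = Basket->new;
    $basket->add_item( cheese => 250 );   # prices in pence
    $basket->add_item( bread  => 120 );

    is( $basket->total,     370, 'totals two items' );
    is( Basket->new->total, 0,   'empty basket totals zero' );

The tests fail first (there is no Basket yet), you write just enough code to make them pass, and then you know when your code works.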

My own personal mantra on software development has changed many times over the years. Currently it goes something like:

  • Minimalism - Make the code/development-process as small as possible (but no smaller). Don't add code/process unless it is absolutely necessary. Before you add it, ask yourself whether there is anything you could change that would remove the necessity of adding it.
  • Tight Feedback - Make your feedback loops as tight as possible (but no tighter). Continually writing tests before you code is better than writing tests at the end of each week. Pair programming is better than a weekly code review. Failing fast is better than an error that comes long after the cause (see the sketch after this list). Etc.
  • Introspection - Look at what you do and how you do it. Try and make it better. Do this all of the time.
  • Transparency - Make it obvious to everybody how everything works. Why does this code exist? Because of this user story. Why does this work this way? Because of this test. How close are we to completion? Look at this big visible chart of completed stories on the wall. Etc.
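
On the "failing fast" point, a minimal Perl sketch (the sub and the rule it enforces are invented): check a value at the boundary where it arrives, so the error points at the cause rather than at a symptom much later.

    use strict;
    use warnings;
    use Carp qw(croak);

    # Fail fast: reject a bad value where it enters the system...
    sub set_discount_rate {
        my ($rate) = @_;
        croak "discount rate must be a number between 0 and 1"
            unless defined $rate && $rate >= 0 && $rate <= 1;
        return $rate;
    }

    print set_discount_rate(0.15), "\n";

    # ...rather than letting it flow downstream until an invoice
    # mysteriously comes out negative, long after the cause.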

Unfortunately - that's hardly pithy :-)

Re^2: On Quality
by Tanktalus (Canon) on May 10, 2005 at 19:54 UTC

    Interesting. I can see where some of these may actually contradict each other. For example, "Do The Simplest Thing That Could Possibly Work" often means "cut and paste", which is the exact opposite of "Don't Repeat Yourself". Refactoring is not the simplest way to get things to work - which, after following some of your links, seems to be part of their own description. To be honest, they started sounding like complex ideas all over again ;-) You need enough experience to tell when something is simple while still abiding by all the other rules.

    Further, I really have to disagree with your first one: "You Aren't Gonna Need It". I've spent years maintaining a project that was developed this way. And over the next couple of months we're just finishing up the complete, ground-up rewrite (which started back in 2001). The rewrite has flexibilities that we'll never use. Or at least, we think we'll probably never use. But more than one flexibility has caught us by surprise, proving useful when a last-minute design change came in that we could handle with a small tweak in a data file, or minor code changes/additions (or both). We have a marketing department that likes to make changes to our product lineup and functionality after we ship the golden master CDs to manufacturing. And I'm not exaggerating here one bit. You don't get this type of flexibility by writing code when you need it, you get this type of flexibility by writing a framework that does it already.

    For the curious, we had to tell manufacturing to ignore the CDs that were on their way, reburn new ones immediately, and then courier those. But we could add our changes in minutes. Waiting until the requirements came in would have caused a delay of hours or days, which likely would have meant we couldn't meet the new requirements and the schedule at the same time, which, presuming a competent marketing department, would mean we couldn't meet end-customer needs.

      I can see where some of these may actually contradict each other. For example, "Do The Simplest Thing That Could Possibly Work" often means "cut and paste", which is the exact opposite of "Don't Repeat Yourself"

      Rather than looking at them as contradictory - look at them as working together. Doing the simplest thing that can possibly work might be to copy and paste something. Which gives us a code smell of duplication, since we should do things once and only once. However, since we refactor mercilessly, we'll quickly remove that duplication into some kind of common abstraction (see the sketch below). So we have clean code. Problem solved.
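
      A minimal Perl sketch of that cycle, with invented report subs - first the paste, then the merciless refactoring that follows straight after:

        use strict;
        use warnings;

        # Step 1 - the simplest thing that could possibly work: copy and paste.
        #   sub monthly_report { print "Monthly report\n", '-' x 40, "\n" }
        #   sub yearly_report  { print "Yearly report\n",  '-' x 40, "\n" }

        # Step 2 - the duplication smells, so it gets factored out at once:
        sub print_report {
            my ($title) = @_;
            print "$title\n", '-' x 40, "\n";
        }

        sub monthly_report { print_report('Monthly report') }
        sub yearly_report  { print_report('Yearly report') }

        monthly_report();
        yearly_report();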

      These are not things you do in isolation - you do them all together all of the time. Doing the simplest thing that can possibly work is a starting point not an end. Synergy is a wonderful thing :-)

      Refactoring is not the simplest way to get things to work

      I used to think that. I don't anymore. I've found incrementally growing and refactoring a framework to be an enormously effective way of developing flexible high quality applications.

      Further, I really have to disagree with your first one: "You Aren't Gonna Need It" … You don't get this type of flexibility by writing code when you need it, you get this type of flexibility by writing a framework that does it already.

      Colour me slightly suspicious with your diagnosis of the fault with your first system :-) Why was the original project so hard to change? Was there duplication? Was there scope for refactoring? How did you know what flexibility you needed to add to the second system? Were there requirements that weren't made explicit in the first system? Etc.

      The reason I'm suspicious is that the flexible framework that you describe is what I'd expect to produce by following YAGNI and the other practices I briefly outlined.

        Colour me slightly suspicious with your diagnosis of the fault with your first system :-)

        Sorry - my box of crayons seems to be missing that colour ;-)

        • Fault #1: all data was embedded in the code.
        • Fault #2: data that was logically similar was not consistently grouped locally - it was usually strewn over many shell functions or even many modules.
        • Fault #3: while the absolutely most-changed data (multiple times per day) was localised to only two logical locations, the next four most frequently changed data types (changed every week to every couple of months) were not co-located in any sensible number of locations.
        • Fault #4: data that changed only infrequently (every year or two) was more localised than the most frequently changed data.
        • Fault #5: it was shell script fer cryin' out loud. ;-) Seriously - shell script means "no local variables". Everything is always global. Which makes it very dangerous to introduce new variables: common names may already be in use, and uncommon names get very long. Ok, that may be a bit exaggerated, but you get the idea. Imagine no "my", "our", or "local" keywords in Perl, and then you have the concept.
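
        For contrast, a minimal Perl sketch of the scoping tools the shell lacked (the variable and sub names are invented):

          use strict;
          use warnings;

          our $release = '7.1';        # package global - visible everywhere,
                                       # which is all shell ever gives you

          sub burn_image {
              my $status = 'pending';  # lexical - exists only in this sub,
                                       # so it can't clobber a $status used
                                       # twelve files away
              local $ENV{TMPDIR} = '/scratch';  # dynamic - old value is
                                                # restored automatically
              return $status;
          }

          burn_image();
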
        Why was the original project so hard to change?

        See above. To change behaviour, I generally had to go through a dozen files (or more) to figure out the scope, even if I only had to end up changing a single file.

        Was there duplication?

        Somewhat surprisingly, no. The beginning phase set up a bunch of global variables, the middle phase used those variables, and the final phase ... also used those variables during cleanup. There was duplication outside of this project - duplication of information, not code, between this project and other projects - which, thanks to the increased flexibility we now have, we no longer need. Rather than hardcoding datapoint "X" in both the shell code and other code, we now keep datapoint "X" in our data files and use Perl to extract it and generate the other code dynamically - still hardcoded, but a simple "make all" gets it all in sync (see the sketch below).
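
        A sketch of that single-source arrangement, with invented file names: the datapoint lives once in a data file, and a small Perl generator emits the shell fragment whenever make runs.

          # gen_vars.pl - run from "make all" (file names invented)
          use strict;
          use warnings;

          # datapoints.txt holds lines like:  MAX_DISCS=7
          open my $in,  '<', 'datapoints.txt'    or die "datapoints.txt: $!";
          open my $out, '>', 'generated_vars.sh' or die "generated_vars.sh: $!";

          print {$out} "# Generated - edit datapoints.txt and rerun make\n";
          while (my $line = <$in>) {
              next unless $line =~ /^(\w+)=(.*)$/;
              print {$out} qq($1="$2"\n);  # still "hardcoded" in the shell,
                                           # but always in sync with the data
          }
          close $out or die "close: $!";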

        Was there scope for refactoring?

        I'm not entirely sure what this means. Probably not. ;-)

        How did you know what flexibility you needed to add to the second system?

        Years of experience with the original system, seeing how requirements change over time and where they may change; and understanding the difference between limitations of the product space we're in and assumptions based on the marketing decisions of the time. Rule #1 of the new code: no assumptions. We're not doing that perfectly yet, but I'm working on it.

        Were there requirements that weren't made explicit in the first system?

        At the time it was developed, long before I joined the company, the scope was incredibly small. So they did exactly what was needed at the time, no more. And it worked great. By the time I joined the team, it was already on the verge of bursting. But I didn't know that, so I kept using it.

        After a couple of years at this, I gained enough experience to be able to see the larger design. (Note how I'm not claiming that it's the perfect design, just larger.) As I said above, the language, which may have been sufficient when we started, was part of the limitations of the existing system (imagine a complex data structure in shell - ewwww!). So a rewrite was necessary anyway.

        The rewrite was a method by which we could gain the flexibility we required to meet needs that we often don't even know about until they're due. We've reduced the estimated effort (and, of course, the actual effort) required for changes by 50% or more on the development side, and we're working on the overall testing side as well.

        As to the comment about refactoring being the simplest way to get things to work: I completely agree - growing and refactoring are awesome ways to develop flexible, high quality applications. But that's not the simplest way to get the immediate job accomplished. That's the simplest way to get the long term unknowns accomplished, but not all of my management chain is enthused about paying for "possible future" enhancements when they get in the way of an upcoming shipment, despite the promise that changes required (whether before or after the upcoming shipment) will cost 50% to 300% more than if we spent an extra 10% now.

Re^2: On Quality
by Anonymous Monk on May 11, 2005 at 19:07 UTC
    • You Aren't Going to Need It: Unless, of course, you are. The whole point of coding it beforehand is that later you won't have time to fiddle with coding and testing; whereas right now, you do. Going back to the customer, and saying: "you can't do that (reasonable request) with our product" sells fewer units than "click on the <enable request> checkbox".
    • Do the Simplest Thing That Could Possibly Work: I agree with this one wholeheartedly. Simple code tends to be correct code.
    • Once And Only Once: Is rarely the simplest thing to do, because it requires abstraction. Abstracting away from the business requirements is one more thing that can go wrong. Having, say, a global variable or class default that has to be tracked down is often harder than just calling the functions explicitly, without implicit values being passed around. Worse still, once you move your code around, from say a nested if cascade to a hash table lookup with coderefs, in a month, the business requirements will require that you move to a different model of abstraction to handle a new problem. Now you've got to go back to the old code again... which is needless repetition of coder time.

      A decent search and replace, applied intelligently (or better still, a language-specific refactoring browser) can make mass changes at a single stroke: without the added confusion of implicit values lurking in the background.

    • Refactor Mercilessly: This is a pipe dream. Coder time is very, very expensive. This is one of those comfortable academic ideas that doesn't make much business sense: doing a lot of work on a program that, if done correctly, will result in a product that does exactly the same thing as it did before, and if done wrong might ruin the product.

      It's a nice idea: it's nice to upgrade code when practical, but there are lots of times and places where the economics just don't justify tampering with things that don't need to be tampered with. Yes, you can get nice results. No, they're probably not profitable for the effort expended.

    • Test First: This is a great concept. It doesn't work in most business settings, though. In order to test, you need to know exactly what you're testing, and how. To properly test a section of the program, you need to define what that section does, and how it does it (all preconditions, post conditions, and side effects, etc.)

      You can't do that until you write the code: and determine that this I/O function X sets global variable Y, which you'll manage by using wrapper functions Z1, Z2, and Z3. Only then can you begin to write meaningful unit tests for Z1, Z2, and Z3: otherwise, the best you can do is write in some wishful thinking, which you'll probably have to tear out, and replace with new tests later. So you might as well wait until the end: otherwise, you're wasting effort (and programmer time is expensive).

    • Other thoughts:
      Minimalism: is just plain good.
      Tight Feedback: is costly. If you have the money to burn, it may or may not be profitable.
      Introspection: is good when it works; and a complete waste of time when it doesn't pay off. Best reserved for people with the actual power to change things. Usually, the real problem is: "we don't have enough money/resources/manpower to solve this problem correctly"
      Transparency: again, this typically generates better code, at a cost of time (and money). Worth it in most cases: but may be hard to persuade management.

      Writing good code is a trivial exercise: any half-decent coder can learn to do it. Writing good code on a budget, without a decent testing environment, and with sharp real time constraints is brutally hard. Unfortunately, that's largely today's business climate... --
      AC

      The whole point of coding it beforehand is that later you won't have time to fiddle with coding and testing; whereas right now, you do.

      The idea is that if you can build something smaller and faster while keeping the code clean enough to add to later, you can deliver it months earlier. I think we've all seen programmers waste time on pet abstractions in pursuit of a cool architectural idea.

      This is one of those comfortable academic ideas that doesn't make much business sense: doing a lot of work on a program that, if done correctly, will result in a product that does exactly the same thing as it did before, and if done wrong might ruin the product.

      If you let things get to the point where refactoring is "a lot of work", you've already screwed up. You're supposed to do it as you build the program, so that you get the benefits of cleaner code while you are building it.

      In order to test, you need to know exactly what you're testing, and how. To properly test a section of the program, you need to define what that section does, and how it does it (all preconditions, post conditions, and side effects, etc.) You can't do that until you write the code

      At some point, you define the interface. It could be while coding, or it could be while writing tests. You will probably make some changes over time, but the work of defining the interface has to happen anyway, so it's not wasted time.

      You Aren't Going to Need It: Unless, of course, you are.

      Well, then you do need it, don't you - and it isn't a situation where YAGNI applies :-)

      The whole point of coding it beforehand is that later you won't have time to fiddle with coding and testing; whereas right now, you do.

      Right now I have a set of features to implement. Some the customer needs now. Some the customer needs later. YAGNI is all about doing stuff the customer needs now first. That way I end up spending time coding and testing features the customer actually needs, rather than spending time coding and testing features that we don't need until some indeterminate date in the future - by which time the requirements may have changed anyway.

      Once And Only Once: Is rarely the simplest thing to do, because it requires abstraction. Abstracting away from the business requirements is one more thing that can go wrong.

      Removing duplication abstracts away from business requirements? Usually the opposite in my experience.

      Having, say, a global variable or class default that has to be tracked down is often harder than just calling the functions explicitly, without implicit values being passed around.

      Sorry, I just don't follow what you're getting at here. Removing duplication invariably makes things clearer in my experience, so I think we must be talking about different things.

      Worse still, once you move your code around, from say a nested if cascade to a hash table lookup with coderefs, in a month, the business requirements will require that you move to a different model of abstraction to handle a new problem. Now you've got to go back to the old code again... which is needless repetition of coder time.

      I don't see how moving from if/then/else statements to a table lookup is removing duplication?
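
      For reference, here's the transformation I take the Anonymous Monk to mean, as a minimal Perl sketch with invented handler names - an if cascade versus a hash of coderefs:

        use strict;
        use warnings;

        sub add    { return "added @_" }     # stub handlers
        sub remove { return "removed @_" }
        sub list   { return "listing" }

        # The nested-if cascade...
        sub handle_if {
            my ($cmd, @args) = @_;
            if    ($cmd eq 'add')    { return add(@args) }
            elsif ($cmd eq 'remove') { return remove(@args) }
            elsif ($cmd eq 'list')   { return list(@args) }
            else                     { die "unknown command '$cmd'" }
        }

        # ...and the hash-of-coderefs lookup it gets rewritten into.
        my %dispatch = (
            add    => \&add,
            remove => \&remove,
            list   => \&list,
        );

        sub handle_table {
            my ($cmd, @args) = @_;
            my $handler = $dispatch{$cmd} or die "unknown command '$cmd'";
            return $handler->(@args);
        }

        print handle_table( add => 'widget' ), "\n";

      Whether that move removes duplication or merely relocates it is exactly what's in question here.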

      A decent search and replace, applied intelligently (or better still, a language-specific refactoring browser) can make mass changes at a single stroke: without the added confusion of implicit values lurking in the background.

      In my experience people waste far more time dealing with bugs related to duplication in code than they would save by avoiding refactoring the duplication out.

      Refactor Mercilessly: This is a pipe dream. Coder time is very, very expensive. This is one of those comfortable academic ideas that doesn't make much business sense: doing a lot of work on a program that, if done correctly, will result in a product that does exactly the same thing as it did before, and if done wrong might ruin the product.

      It's not a pipe dream since lots of people do it with a fair amount of success.

      Short term expense, long term profit. If you are refactoring mercilessly it's a very short term expense since it's a background task that you're doing all of the time. Refactoring only gets expensive if you let the code get messy. Clean the kitchen after every meal, not once a month.

      (which reminds me - I need to do the washing up :-)

      Test First: This is a great concept. It doesn't work in most business settings, though.

      I know lots of business coders, myself included, who'll disagree with you there :-)

      In order to test, you need to know exactly what you're testing, and how. To properly test a section of the program, you need to define what that section does, and how it does it (all preconditions, post conditions, and side effects, etc.)

      That is, as it were, the point. You use the tests you write to define the requirements for the code that you write. It works really well.

      Tight Feedback: is costly. If you have the money to burn, it may or may not be profitable

      Is it more expensive to find your code fails tests now, or next week when QA gets it? Is it more expensive to know the users hate a feature now, or after the manuals are printed? Is it more expensive to know that the code you're writing duplicates a feature Bob wrote last week as you start, or two weeks later in the code review?

      Tight feedback loops save money. In spades.

      Introspection: is good when it works; and a complete waste of time when it doesn't pay off. Best reserved for people with the actual power to change things. Usually, the real problem is: "we don't have enough money/resources/manpower to solve this problem correctly"

      Often resources aren't the problem. It's resources being badly applied, usually because of foolish project management practices and overcomplicated development methodologies. Give me a well organised group of 12 developers over a badly organised 120 any day of the week.

      Transparency: again, this typically generates better code, at a cost of time (and money). Worth it in most cases: but may be hard to persuade management.

      I have to admit I've got to that stubborn age when I'm going to dig out the people in charge and shake them until they listen :-)

      Writing good code is a trivial exercise: any half-decent coder can learn to do it.

      I have to disagree. Writing good robust code is damn difficult. I've been a professional programmer for more than half my life now and I'm still finding new and better ways to do it.

      If it's so damn trivial why do so many people bugger it up on such a regular basis?
