As we're wrapping up the final stages of a cycle, going into the last week of crunch, I've been trying to get the final touches on many things finished and out of the way so I can breathe a sigh of relief and move on to a new job. However, I've noticed a few things that have piqued my interest, and I wondered if this is just something in our group of developers, or if it's a bit more universal.

I was heavily involved in the design phase of this project. Knowing that the poor folks who both write our documentation (we fondly call it "fiction" ;-}) and compile the test scenarios to write the test plans will base their work on what I write, I try to focus on everything from the user's perspective. I think this is the only common language I have with everyone, as no one, except the coworker who is tasked with actual implementation, will care about data structures, APIs, and the like. (The obvious exception is the one set of APIs we're publicising for others to use.)

As I got away from all that, implementation was well underway, so I went to start testing the implementations against the spec. And I found some egregious errors. Program parameters that weren't even close to doing the right thing, output that didn't quite match what it was supposed to be (i.e., it was unreadable), and, mortal sin of mortal sins, output that wasn't properly translation-enabled (we support 29 languages - over 50% of our business is non-English - and dealing with both China and France means that not a lick of English can appear anywhere when those languages are enabled).
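
By "translation-enabled" I mean that every user-visible string has to go through a message catalogue rather than being hard-coded. A minimal Perl sketch of the idea (Locale::TextDomain and the "myapp" text domain are just for illustration here; the real scripts are shell and use a different mechanism):

    use strict;
    use warnings;
    use Locale::TextDomain 'myapp';   # illustrative text domain, not the real one

    my $count = 42;

    # Every user-visible string is looked up in the current locale's catalogue,
    # so a French or Chinese catalogue replaces the English text entirely.
    print __("Report generated successfully.\n");
    print __x("Processed {count} records.\n", count => $count);

The point is simply that no literal English ever reaches the output directly; the catalogue for whichever language is enabled supplies the text.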

So, over the last few weeks, I've opened a half-dozen or so defects. And over the last couple of days, I've rewritten large swaths of code (mostly in shell - I'm looking forward to a relocatable perl in hopes of rewriting these very shell scripts!), trying to meet the spec as it has (now) been documented, which is how the end user will eventually expect things to work.

I realise that, in an ideal environment, there would be ample time to ensure that every developer fully understood not only the tasks he or she is to perform, but also how those cogs fit into the bigger wheel, based on the specs. But I know I don't work in that ideal environment, so I'm guessing most other monks don't, either. In practical terms, I'm coming to the conclusion that either a) it's not realistic to expect that everything checked in to version control meets all the specifications it's intended to, especially not on the first attempt, or b) my team isn't quite as good as I thought they were. I'm not quite sure which - both explanations fit the situation - so I'm looking for more insight. Is this something I should really concern myself with, or am I just now finding an area I need to concentrate on, or perhaps a bit of both?

Re: Design. Implement. Bug Report.
by dragonchild (Archbishop) on Feb 23, 2006 at 05:01 UTC
    To emphasize chromatic's excellent points, release cycles need to be short to get the necessary feedback. This technique is called "Tracer Bullets" and is an XP favorite.

    Also, the only specification worth talking about is the automated testcase. Anything else is out of sync with the code the moment it's written. This is called "Test-Driven Development" and is also an XP staple.
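
    As a rough sketch of what "the spec is the testcase" looks like in Perl (the module and its interface are invented for the sake of the example, not taken from the original poster's project):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # The spec lives here as executable assertions; when the code drifts,
        # a test fails instead of a document quietly going stale.
        use_ok('MyApp::Report');    # hypothetical module name

        my $report = MyApp::Report->new(format => 'summary');
        is($report->format, 'summary', 'constructor honours the format parameter');

    Write that before MyApp::Report exists, watch it fail, then make it pass.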

    Neither of those techniques, however, requires one to drink the XP koolaid. They're both extremely valuable on their own. Try them sometime.


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: Design. Implement. Bug Report.
by chromatic (Archbishop) on Feb 23, 2006 at 04:43 UTC

    I prefer c) Untested specifications lead to unsatisfied customers.

    Until you give working code to real users and start to get feedback on what's right and what's wrong, the best you can do with your specification is to guess. (I happen to think that the solution to that is to work with your users to write the specification in the form of executable tests, but not everyone agrees.)
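
    For instance, each line of the written spec can become a named, runnable check (a loose sketch - the requirements and the run_report() helper are made up for illustration):

        use strict;
        use warnings;
        use Test::More;

        # Each entry is a requirement phrased in the users' own words,
        # paired with a check that can actually be executed.
        my @spec = (
            [ 'summary report fits on one screen',
              sub { length(run_report('--summary')) < 2000 } ],
            [ 'detail report lists every account',
              sub { run_report('--detail') =~ /^Account:/m } ],
        );

        for my $case (@spec) {
            my ($requirement, $check) = @$case;
            ok($check->(), $requirement);
        }

        done_testing();

        # Stand-in for invoking the real program; dummy output so the sketch runs.
        sub run_report { return "Account: demo\n" }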

Re: Design. Implement. Bug Report.
by ptum (Priest) on Feb 23, 2006 at 18:00 UTC

    I would say (b), that your team probably wasn't as good as you thought they were. Without intending offense, few individual developers or teams of developers are. :)

    Some time ago I served in a Quality Assurance role, something for which I am not, by temperament, particularly well-suited. At the time, I supported five distinct development teams, each reporting to their own management structure. I found (without significant exception) that the teams who had prior experience with independent QA wrote substantially better code than the teams who had been permitted to roll their code directly to production without external QA. And I found that after one cycle with me looking over their shoulders, their code quality improved measurably. There is something about being held publicly and embarrassingly accountable that forces a developer to go back and check their work thoroughly. Developers will work harder to avoid even mild shame and ridicule than they will to save their own weekends from post-release bug fixing sessions.

    If you're finding egregious errors, it is likely because of a lack of decent QA, or a lack of accountability for failing a QA pass. I know that the developers I supported playfully vied with me and with one another to make it through a QA cycle with the fewest and least severe defects found. On my side, I thoroughly enjoyed the thrill of the hunt, ruthlessly seeking the soft underbelly of the code and mercilessly exploiting its weaknesses, as any self-respecting QA person will.

    As an aside, if you're looking for good QA, look for a person who has that distinctive evil propensity for smelling blood in the water, the person who delights in destruction. I used to work with an excellent QA guy who had that peculiar bent; whenever I want to test software, I think, "What would Kevin do?" Usually, the most diabolical test occurs to me in short order once I get into that mindset. Most developers are so caught up in building that they cannot really conceive of actively working to break their own software -- it really does take a different way of thinking to test software thoroughly.


    No good deed goes unpunished. -- (attributed to) Oscar Wilde
Re: Design. Implement. Bug Report.
by cbrandtbuffalo (Deacon) on Feb 23, 2006 at 18:01 UTC
    I think you are correct in your conclusion that you can't communicate through the spec alone. Language just isn't exact enough to allow you to write a spec, hand it over the wall, and get a product that exactly matches what you wrote.

    But that doesn't mean you need to look over everyone's shoulder, either. The middle ground is to have a good technical manager assigned to the project who checks in on the implementors periodically to see how things are going. They can answer questions and make sure that the developer is clear on what functionality the software should have.

    We have seen this issue manifest itself in code reviews. We'll show up at a code review for a few thousand lines of code and find that core design decisions are incorrect. The question there is: how did you get this far down the wrong path without someone noticing?

    The solution, in our case, is our general rule that the technical manager should do informal code reviews every 500 or so lines of code. If things are going fine, it's a very cursory look to make sure the developer is going in the right direction. But if they're on the wrong path, it's much better to catch it at that stage.

    So I think you do need to communicate throughout a project to make sure expectations and reality stay in parallel, and to correct them when they aren't. Written specs alone aren't enough, because two reasonable, intelligent people can come to two different conclusions about features and implementation. The only way to resolve those differences early is to keep an open dialog.

Re: Design. Implement. Bug Report.
by adrianh (Chancellor) on Feb 24, 2006 at 11:49 UTC

    Another vote for chromatic's (c).

    "As I got away from all that, implementation was well underway, so I went to start testing the implementations against the spec. And I found some egregious errors."

    I wonder if things would have gone as far off track if those tests had been written first?
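
    Even for the shell scripts involved, a spec-derived test could have been written up front and run against whatever got checked in. Something like this rough sketch (the report.sh script, its --summary option, and the assumption that it honours LANG are all invented for illustration):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # Run the script under a non-English locale and check the spec's two
        # most basic promises: it works, and no hard-coded English leaks out.
        local $ENV{LANG} = 'fr_FR.UTF-8';
        my $output = qx{./report.sh --summary 2>&1};

        is($? >> 8, 0, 'script exits cleanly');
        unlike($output, qr/\b(?:error|total|records)\b/i,
               'no untranslated English in localised output');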