PerlMonks  

Re^4: On Quality

by Tanktalus (Canon)
on May 11, 2005 at 00:15 UTC ( [id://455815] )


in reply to Re^3: On Quality
in thread On Quality

Colour me slightly suspicious with your diagnosis of the fault with your first system :-)

Sorry - my box of crayons seems to be missing that colour ;-)

  • Fault #1: all data was embedded in the code.
  • Fault #2: data that was logically similar was not consistently grouped locally; it was usually strewn across many shell functions or even many modules.
  • Fault #3: while the absolutely most-changed data (multiple times per day) was localised to only two logical locations, the next four most frequently changed data types (changing every week to every couple of months) were not co-located in any sensible number of locations.
  • Fault #4: data that changed only infrequently (every year or two) was more localised than the most frequently changed data.
  • Fault #5: it was shell script fer cryin' out loud. ;-) Seriously - shell script means "no local variables". Everything is always global. Which makes it very dangerous to use new variables as they may already be in use if they are common names, or they're very long if they aren't common names. Ok, that may be a bit exaggerated, but you get the idea. Imagine no "my", "our", or "local" keywords in perl, and then you have the concept.
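The hazard in Fault #5 is easy to demonstrate with a few lines of POSIX shell (a hypothetical sketch, not code from the system in question): a variable assigned inside a function silently clobbers a caller's variable of the same name.

```shell
#!/bin/sh
# In POSIX shell, variables set inside a function are global by default.
# The common name "status" in the helper clobbers the caller's "status".

status="all systems go"

check_disk() {
    status=1          # intended as a private flag, but it is global
}

check_disk
echo "$status"        # prints "1", not "all systems go"
```

Bash and ksh offer `local`/`typeset` as extensions, but plain POSIX sh has nothing comparable to Perl's `my`, so every new variable name in a large script risks exactly this kind of collision.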
Why was the original project so hard to change?

See above. To change behaviour, I generally had to go through a dozen files (or more) to figure out the scope, even if I ended up changing only a single file.

Was there duplication?

Somewhat surprisingly, no. The beginning phase set up a bunch of global variables, the middle phase used those variables, and the final phase ... also used those variables during cleanup. There was, however, duplication of information (not code) between this project and other projects. Thanks to the increased flexibility we now have, we don't need that anymore: rather than hardcoding datapoint "X" in both the shell code and other code, we now keep datapoint "X" in our data files, and use perl to extract it and generate the other code dynamically. It's still hardcoded, but a simple "make all" gets it all in sync.

Was there scope for refactoring?

I'm not entirely sure what this means. Probably not. ;-)

How did you know what flexibility you needed to add to the second system?

Years of experience with the original system: seeing how requirements change over time, and seeing where they may change. Understanding the difference between the limitations of the product space we're in and assumptions based on the marketing decisions of the time. Rule #1 of the new code: no assumptions. We're not doing that perfectly yet, but I'm working on it.

Were there requirements that weren't made explicit in the first system?

At the time it was developed, long before I joined the company, the scope was incredibly small. So they did exactly what was needed at the time, no more. And it worked great. By the time I joined the team, it was already on the verge of bursting. But I didn't know that, so I kept using it.

After a couple of years at this, I gained enough experience to be able to see the larger design. (Note how I'm not claiming that it's the perfect design, just larger.) As I said above, the language, which may have been sufficient when we started, was part of the limitations of the existing system (imagine a complex data structure in shell - ewwww!). So a rewrite was necessary anyway.

The rewrite was a method by which we could gain the flexibility we required to meet needs that we often don't even know about until they're due. We've reduced the estimated effort (and, of course, the actual effort) required for changes by 50% or more on the development side, and we're working on the overall testing side as well.

As to the comment about refactoring being the simplest way to get things working: I completely agree that growing and refactoring are excellent ways to develop flexible, high-quality applications. But that's not the simplest way to get the immediate job accomplished. That's the simplest way to get the long term unknowns accomplished, but not all of my management chain is enthused about paying for "possible future" enhancements when they get in the way of an upcoming shipment, despite the promise that changes required (whether before or after the upcoming shipment) will cost 50% to 300% more than if we spent an extra 10% now.

Replies are listed 'Best First'.
Re^5: On Quality
by adrianh (Chancellor) on Jul 26, 2005 at 20:58 UTC

    Belated response....

    Sorry - my box of crayons seems to be missing that colour ;-)

    It's a sort of greeny pink :-)

    Fault #1: all data was embedded in the code.

    Of course that's not necessarily a fault in and of itself. It's only a fault if it makes that data hard to change.

    Fault #2: data that was logically similar was not consistently grouped locally; it was usually strewn across many shell functions or even many modules.

    Fault #3: while the absolutely most-changed data (multiple times per day) was localised to only two logical locations, the next four most frequently changed data types (changing every week to every couple of months) were not co-located in any sensible number of locations.

    Fault #4: data that changed only infrequently (every year or two) was more localised than the most frequently changed data.

    That sounds like a lot of dodgy abstractions and duplication of responsibility to me. Time for some merciless refactoring.

    As perrin said, YAGNI is all about avoiding doing stuff until you actually need it, not about avoiding it once you need it.

    Fault #5: it was shell script fer cryin' out loud. ;-) Seriously - shell script means "no local variables". Everything is always global. Which makes it very dangerous to use new variables as they may already be in use if they are common names, or they're very long if they aren't common names. Ok, that may be a bit exaggerated, but you get the idea. Imagine no "my", "our", or "local" keywords in perl, and then you have the concept.

    There you might have me :-) Early architectural decisions like platform and development language are hard ones to change after the fact. If you're stuck with a language with little support for higher level abstraction you're potentially heading for a rewrite.

    Of course there are strategies to help avoid this. For example:

    • Avoid developing in languages that will cripple you later on :-)
    • Address the problem early. When I've been in situations like this, the need to move to another language was obvious long before the political will to do it was found.
    • Look for ways to move to another language incrementally while keeping things running. It can actually save time in the long term to add some code to the old system so you can decouple a few bits and change them incrementally.
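The incremental route in the last point can be sketched in a few lines of shell (hypothetical names; the thread doesn't describe a specific mechanism): carve one phase out of the old script and delegate it to a rewritten implementation when one exists, falling back to the legacy code otherwise, so migration proceeds one phase at a time without a flag day.

```shell
#!/bin/sh
# Old monolithic script, with one phase carved out behind a seam.

legacy_cleanup() {
    echo "cleanup: legacy shell implementation"
}

cleanup() {
    if [ -x ./cleanup.pl ]; then
        ./cleanup.pl          # new implementation, migrated out of shell
    else
        legacy_cleanup        # untouched old code path
    fi
}

cleanup
```

Each phase migrated this way shrinks the old system while the whole thing keeps running, which is exactly the decoupling the point above describes.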
    At the time it was developed, long before I joined the company, the scope was incredibly small. So they did exactly what was needed at the time, no more. And it worked great. By the time I joined the team, it was already on the verge of bursting. But I didn't know that, so I kept using it.

    Sounds like the classic Big Ball Of Mud. My sympathies.

    After a couple of years at this, I gained enough experience to be able to see the larger design. (Note how I'm not claiming that it's the perfect design, just larger.) As I said above, the language, which may have been sufficient when we started, was part of the limitations of the existing system (imagine a complex data structure in shell - ewwww!). So a rewrite was necessary anyway.

    Yeah, with a shell script you're pretty much stuffed :-)

    That's the simplest way to get the long term unknowns accomplished, but not all of my management chain is enthused about paying for "possible future" enhancements when they get in the way of an upcoming shipment, despite the promise that changes required (whether before or after the upcoming shipment) will cost 50% to 300% more than if we spent an extra 10% now.

    If you've not come across it already I've found the "Technical Debt" metaphor a really useful tool in helping management understand this sort of thing.

Re^5: On Quality
by perrin (Chancellor) on May 12, 2005 at 15:09 UTC
    "You Aren't Gonna Need It" doesn't work in isolation. You have to write clean code and refactor it if it gets messy. The code in your example sounds like it was allowed to become a ball of mud.

    YAGNI is about leaving out gold-plating and unnecessary features and abstractions, not about writing messy unmaintainable code. Clean code is something you are always going to need.
