PerlMonks  

Re^3: On Quality

by adrianh (Chancellor)
on May 10, 2005 at 23:41 UTC [id://455804]


in reply to Re^2: On Quality
in thread On Quality

I can see where some of these may actually contradict each other. For example, "Do The Simplest Thing That Could Possibly Work" often means "cut and paste" which is the exact opposite of "Don't Repeat Yourself"

Rather than looking at them as contradictory - look at them as working together. Doing the simplest thing that can possibly work might be to copy and paste something, which gives us a code smell of duplication, since we should do things once and only once. However, since we refactor mercilessly, we'll quickly remove that duplication into some kind of common abstraction. So we have clean code. Problem solved.
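To make that concrete, here's a toy Perl sketch (names and formatting logic invented for illustration): first you copy and paste, then merciless refactoring pulls the duplication into one sub.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Before refactoring: the simplest thing that could possibly work --
# two copy-pasted expressions that format a name the same way:
#   my $greeting = "Hello, " . ucfirst(lc($first)) . " " . ucfirst(lc($last)) . "!";
#   my $label    = "User: "  . ucfirst(lc($first)) . " " . ucfirst(lc($last));

# After merciless refactoring: the duplication lives in one place.
sub full_name {
    my ($first, $last) = @_;
    return join ' ', map { ucfirst lc } $first, $last;
}

my $greeting = "Hello, " . full_name("ada", "LOVELACE") . "!";
print "$greeting\n";    # Hello, Ada Lovelace!
```

The simplest thing got you working code fast; the refactoring step is what keeps it clean once the duplication smell appears.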

These are not things you do in isolation - you do them all together all of the time. Doing the simplest thing that can possibly work is a starting point not an end. Synergy is a wonderful thing :-)

Refactoring is not the simplest way to get things to work

I used to think that. I don't anymore. I've found incrementally growing and refactoring a framework to be an enormously effective way of developing flexible high quality applications.

Further, I really have to disagree with your first one: "You Arent Gonna Need It" … You don't get this type of flexibility by writing code when you need it, you get this type of flexibility by writing a framework that does it already.

Colour me slightly suspicious with your diagnosis of the fault with your first system :-) Why was the original project so hard to change? Was there duplication? Was there scope for refactoring? How did you know what flexibility you needed to add to the second system? Were there requirements that weren't made explicit in the first system? Etc.

The reason I'm suspicious is that the flexible framework that you describe is what I'd expect to produce by following YAGNI and the other practices I briefly outlined.

Re^4: On Quality
by Tanktalus (Canon) on May 11, 2005 at 00:15 UTC
    Colour me slightly suspicious with your diagnosis of the fault with your first system :-)

    Sorry - my box of crayons seems to be missing that colour ;-)

    • Fault #1: all data was embedded in the code.
    • Fault #2: data that was logically similar was not consistently grouped locally - it was usually strewn over many shell functions or even many modules.
    • Fault #3: while the absolutely most-changed data (multiple times per day) was localised to only two logical locations, the next 4 top-most changed data types (every week through every couple of months) were not co-located in any sensible number of locations.
    • Fault #4: data that changed quite infrequently (every year or two) was more localised than the most frequently changed data.
    • Fault #5: it was shell script fer cryin' out loud. ;-) Seriously - shell script means "no local variables". Everything is always global. Which makes it very dangerous to use new variables as they may already be in use if they are common names, or they're very long if they aren't common names. Ok, that may be a bit exaggerated, but you get the idea. Imagine no "my", "our", or "local" keywords in perl, and then you have the concept.
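    The shell-globals problem in Fault #5 is easy to show with a toy Perl sketch (the sub and variable names here are invented). In shell, a helper function can silently clobber a caller's variable; with Perl's "my", each variable is lexical:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # With "my", $count inside the sub is lexical -- invisible outside,
    # and it cannot collide with a caller's variable of the same name.
    sub count_items {
        my @items = @_;
        my $count = scalar @items;    # lexical: lives only in this sub
        return $count;
    }

    my $count = 10;                   # the caller's $count ...
    count_items(qw(a b c));
    print "$count\n";                 # ... is still 10, untouched
    ```

    In shell, the equivalent helper would have overwritten the caller's `count`, which is exactly why picking variable names in a large shell codebase gets dangerous.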
    Why was the original project so hard to change?

    See above. To change behaviour, I generally had to go through a dozen files (or more) to figure out the scope, even if I only had to end up changing a single file.

    Was there duplication?

    Somewhat surprisingly, no. The beginning phase set up a bunch of global variables, the middle phase used those variables, and the final phase ... also used those variables during cleanup. There was duplication outside of this project - duplication of information, not code, between this project and other projects - which, thanks to the increased flexibility we now have, we don't need anymore. Rather than hardcoding datapoint "X" in both the shell code and other code, we now keep datapoint "X" in our data files and use perl to extract it and generate the other code dynamically - still hardcoded, but a simple "make all" will get it all in sync.
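    A minimal sketch of that generate-from-data-files approach (the datapoint names and output format below are invented for illustration): the data lives in one place, and a small Perl generator emits the fragment that used to be hardcoded elsewhere.

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Stand-in for the real data file: each datapoint is defined once here.
    my %data = (MAX_RETRIES => 5, TIMEOUT => 30);

    # Emit the shell-style assignments that other code used to hardcode;
    # "make all" would run this so everything stays in sync.
    sub generate_shell_config {
        my ($data) = @_;
        return join '', map { "$_=$data->{$_}\n" } sort keys %$data;
    }

    print generate_shell_config(\%data);
    # MAX_RETRIES=5
    # TIMEOUT=30
    ```

    Changing a datapoint then means editing the data file and regenerating, instead of hunting down every place it was hardcoded.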

    Was there scope for refactoring?

    I'm not entirely sure what this means. Probably not. ;-)

    How did you know what flexibility you needed to add to the second system?

    Years of experience with the original system seeing how requirements change over time, and seeing where requirements may change. Understanding the differences between limitations of the product space we're in vs assumptions based on the marketing decisions at the time. Rule #1 of the new code: no assumptions. We're not doing that perfectly yet, but I'm working on it.

    Were there requirements that weren't made explicit in the first system?

    At the time it was developed, long before I joined the company, the scope was incredibly small. So they did exactly what was needed at the time, no more. And it worked great. By the time I joined the team, it was already on the verge of bursting. But I didn't know that, so I kept using it.

    After a couple of years at this, I gained enough experience to be able to see the larger design. (Note how I'm not claiming that it's the perfect design, just larger.) As I said above, the language, which may have been sufficient when we started, was part of the limitations of the existing system (imagine a complex data structure in shell - ewwww!). So a rewrite was necessary anyway.

    The rewrite was a method by which we could gain the flexibility we required to meet needs that we often don't even know about until they're due. We've reduced the estimated effort (and, of course, the actual effort) required for changes by 50% or more on the development side, and we're working on the overall testing side as well.

    As to the comment about refactoring being the simplest way to get things to work: I completely agree - growing and refactoring are awesome ways to develop flexible high quality applications. But that's not the simplest way to get the immediate job accomplished. That's the simplest way to get the long term unknowns accomplished, but not all of my management chain is enthused about paying for "possible future" enhancements when they get in the way of an upcoming shipment, despite the promise that changes required (whether before or after the upcoming shipment) will cost 50% to 300% more than if we spent an extra 10% now.

      Belated response....

      Sorry - my box of crayons seems to be missing that colour ;-)

      It's a sort of greeny pink :-)

      Fault #1: all data was embedded in the code.

      Of course that's not necessarily a fault in and of itself. It's only a fault if it makes that data hard to change.

      Fault #2: data that was logically similar was not consistently grouped locally - it was usually strewn over many shell functions or even many modules.

      Fault #3: while the absolutely most-changed data (multiple times per day) was localised to only two logical locations, the next 4 top-most changed data types (every week through every couple of months) were not co-located in any sensible number of locations.

      Fault #4: data that changed quite infrequently (every year or two) was more localised than the most frequently changed data.

      That sounds like a lot of dodgy abstractions and duplication of responsibility to me. Time for some merciless refactoring.

      As perrin said, YAGNI is all about avoiding doing stuff until you actually need it - not avoiding it once you need it.

      Fault #5: it was shell script fer cryin' out loud. ;-) Seriously - shell script means "no local variables". Everything is always global. Which makes it very dangerous to use new variables as they may already be in use if they are common names, or they're very long if they aren't common names. Ok, that may be a bit exaggerated, but you get the idea. Imagine no "my", "our", or "local" keywords in perl, and then you have the concept.

      There you might have me :-) Early architectural decisions like platform and development language are hard ones to change after the fact. If you're stuck with a language with little support for higher level abstraction you're potentially heading for a rewrite.

      Of course there are strategies to help avoid this. For example:

      • Avoid developing in languages that will cripple you later on :-)
      • Address the problem early. I know when I've been in situations like this it's been obvious long before the political will was found that a move to another language was necessary.
      • Look for ways to move to another language incrementally while keeping things running. It can actually save time in the long term to add some code to the old system so you can decouple a few bits and change them incrementally.
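      The incremental-migration idea in that last point can be sketched like this (a hypothetical example - the sub name, the environment switch, and the legacy script name are all invented): wrap one piece of the old shell system behind a Perl interface, then swap in the new implementation piece by piece while everything keeps running.

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;

      # Hypothetical migration shim: callers go through this sub, which can
      # dispatch to either the new Perl code or the legacy shell script.
      sub cleanup_workdir {
          my ($dir) = @_;
          if ($ENV{USE_NEW_CLEANUP}) {
              # new Perl implementation, migrated piece by piece
              unlink glob "$dir/*.tmp";
          }
          else {
              # fall back to the legacy shell script (name invented here)
              system('sh', 'legacy_cleanup.sh', $dir) == 0
                  or warn "legacy cleanup failed for $dir\n";
          }
      }
      ```

      Once the new path has proven itself, the legacy branch (and eventually the shim) can be deleted - no big-bang rewrite required.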
      At the time it was developed, long before I joined the company, the scope was incredibly small. So they did exactly what was needed at the time, no more. And it worked great. By the time I joined the team, it was already on the verge of bursting. But I didn't know that, so I kept using it.

      Sounds like the classic Big Ball Of Mud. My sympathies.

      After a couple of years at this, I gained enough experience to be able to see the larger design. (Note how I'm not claiming that it's the perfect design, just larger.) As I said above, the language, which may have been sufficient when we started, was part of the limitations of the existing system (imagine a complex data structure in shell - ewwww!). So a rewrite was necessary anyway.

      Yeah, with a shell script you're pretty much stuffed :-)

      That's the simplest way to get the long term unknowns accomplished, but not all of my management chain is enthused about paying for "possible future" enhancements when they get in the way of an upcoming shipment, despite the promise that changes required (whether before or after the upcoming shipment) will cost 50% to 300% more than if we spent an extra 10% now.

      If you've not come across it already I've found the "Technical Debt" metaphor a really useful tool in helping management understand this sort of thing.

      "You Aren't Gonna Need It" doesn't work in isolation. You have to write clean code and refactor it if it gets messy. The code in your example sounds like it was allowed to become a ball of mud.

      YAGNI is about leaving out gold-plating and unnecessary features and abstractions, not about writing messy unmaintainable code. Clean code is something you are always going to need.
