Re: Beyond Agile: Subsidiarity as a Team and Software Design Principle
by BrowserUk (Patriarch) on Jul 22, 2015 at 02:27 UTC
Basically the waterfall approach is intended to apply civil engineering practices to software
That's certainly one of the problems that causes the waterfall approach to fail:
- Mechanical engineering practices in general fail with software development because hardware deals with real, finite and deterministic units, measures and quantities.
Software has few, if any, reliably measurable metrics.
- Even the most complicated of machines -- the space shuttle for example -- has a few million parts.
Even fairly modest software projects can have 10 or 100 times as many 'parts'.
- Mechanical engineering projects can afford to spend big on development and testing.
Infrastructural projects like roads, bridges, tunnels, railways, dams, houses and factories have serviceable lifetimes measured in multiple decades if not centuries; thus their development costs are amortised over those long lifespans.
Industrial projects (turret lathes; tunneling machines; welding robots; aircon plant; windmills; aircraft etc.) amortise their costs over the production of millions of units of consumer goods or services.
Consumer goods (cars; phones; light bulbs; printers; fridges etc) amortise their development costs over millions of units sold.
Most software projects are one-offs, and often associated with cost centers rather than profit centers; i.e. administration rather than production.
- The specifications for hardware can most times be definitively written down.
If a bridge is too short...; a plane doesn't have enough fuel capacity ...; a dam can't hold back the water ....
Software is nearly always far harder to specify.
But that is far from the only reason:
- The single biggest problem with the waterfall methodology is the belief by the analysts that they can construct a definitive specification up front.
It has been proven time and time again that even the best analysts cannot fully specify anything more than the most trivial of software systems.
And the moment you recognise that, waterfall is inevitably indicted as the root cause behind huge numbers of software project failures going back 40 years or more.
- Waterfall 'works' in the same way that erosion works: it gets there in the end.
But unless you're writing software that will not be needed for 5 years, and will stay in service for 30 -- and almost nothing does these days -- by the time it gets there, the world has moved on. Twice if not three times.
- Waterfall is a management-heavy process from a bygone era of demigods and vassals. And it didn't even work well when that was an acceptable working practice.
Anyone still advocating waterfall as a workable methodology in the modern world simply has no stake in anything real and current. They are living in a rose-tinted vision of a misunderstood past.
- There are perhaps a few dozen projects that have the funding and time and reason to be developed that way in today's world.
Aircraft & spacecraft control systems; nuclear power plants; medical monitoring systems; military projects. Projects with huge budgets, very long lead times and failure-is-death criteria.
Your website, database, word processor, browser, compiler, PoS, phone, washing machine, even banking systems have neither those kinds of budgets nor those kinds of reasons.
Better to get something working next week, find its weaknesses the week after and improve/fix them the week after that, and iterate that RAD process 6 times; than spend 6 months trying to definitively specify every last function and feature, and another 6 months getting to the point where you have something to test, only to discover your data gathering was flawed, your guesses were inaccurate, your assumptions wrong and your vision of what the customer needs is completely different from what they actually require.
But you don't need stand-up meetings or scrum masters or storyboards etc. to achieve fast-feedback RAD development either. Pretty much all you need is a customer's advocate with the authority to conduct regular, hands-on progress inspections; challenge decisions; and require changes.
Beyond that, there are many ways of running things -- pair working or peer reviews; tests first or mock-up and comply; continuous builds or weekly commits -- some are more appropriate to some types of software; others to others.
Strong technical leadership is good; overbearing micromanagement is bad; non-technical bean-counting counter-productive; automated bean-counting a piss-poor substitute for open and blame-free peer support and review.
Guidelines are good; blind adherence (to anything) is bad; manifestos, buzz-word compliance, cheat sheets and road-maps are sops to being seen to be following 'the process'.
Waterfall has all the bads; and none of the goods. In either sense of that last word.
With due respect, that actually misses the problem.
Waterfall works when the cost of failure is high (particularly when lives are at stake) and uncertainty in the specification is low. There are cases where those criteria are met, but they are usually not what we think of when we talk about software. The software that runs the space shuttle, avionics, radiotherapy controls, etc. is exactly the sort of case where waterfall *is* appropriate, in my view. If a stack overflow gives someone radiation poisoning, causes a plane to crash, or sends a car accelerating out of control, that is a very different situation from "I want to track what I do for my customers."
Rather, the problem is with the type of problem being solved. If you are solving a clear, precise, technical problem, the considerations are different than if you are solving a business-process problem, one which will probably be transformed in unpredictable ways by the tool you are writing to solve it.
I think we would both agree that these problems lessen when components are more clearly segmented by team and by responsibility, so that the bounded complexity goes down?
Re: Beyond Agile: Subsidiarity as a Team and Software Design Principle
by chacham (Prior) on Jul 21, 2015 at 12:56 UTC
Nice article, though I completely disagree. While Waterfall may have some problems, there's no reason to throw out the baby with the bathwater.
The Waterfall method is not just to get requirements and start coding. Waterfall works within the overall code-writing process, which includes designing the code itself. Pseudocode is the best way to do it, though it unfortunately is unpopular. The idea is: get the requirements, find a solution (specification), design the code (pseudocode), and then code flawless software.
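For example, a design pass might look something like this. A minimal sketch, in Perl since that is the lingua franca here; the reconciliation routine and its field names are invented purely for illustration:

    # Pseudocode first, agreed on before any real code is written:
    #   for each transaction in the ledger:
    #       find the matching purchase order
    #       if none exists, flag the transaction for review
    #       otherwise, mark both as reconciled
    sub reconcile_ledger {
        my ($ledger, $orders) = @_;   # arrayref of txns; hashref keyed by PO number
        my @flagged;
        for my $txn (@$ledger) {
            my $po = $orders->{ $txn->{po_number} };
            if (!$po) {
                push @flagged, $txn;  # no matching purchase order
                next;
            }
            $txn->{reconciled} = 1;
            $po->{reconciled}  = 1;
        }
        return \@flagged;             # transactions needing manual review
    }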
The major issue here is not the coding. It is that the customers rarely know what they want. They think they do, but they don't. They'll know it when they see it. So, they sign off on a requirements document without really knowing what the product will be. The issue, however, is not the requirements document. It's that the requirements and specifications are often merged into one document. That's bad.
The solution is to have both a requirements document and a specification document. But what are they? A requirements document states the problem that needs solving, and a determination of what is required to solve it. Hence the name: requirements. It is written by the customer, as it is their way of communicating what they need. For example: we need an accounting system to handle internal spending. It then goes into detail about the requirements, not the solution. Meetings are then held to understand the requirements. When the document is completely understood, it should need no further change.
The specification document is completely different. First, it is written by the development (or UI) team. It specifies a solution, including things like mockup screenshots and anything else that the customer will interface with. No backend information need be mentioned, unless the customer will be using that too. This document specifies how it will respond and what it will produce.
Often, this is called a Use Case document. However, Use Cases are subsidiary to the specification. The specification mentions every aspect, whereas a Use Case covers only a subset of them. Though, enough Use Case documents could be used in lieu of a specification.
The specification document generally goes through many cycles. Alternatively, mockups can go through the cycles with a specification document written at the end. Either way, there are many cycles. The Agile method has much to offer here. Design the UI quickly, and with the customer involved every step of the way. The customer will see it, experience it, and respond. Now the customers know what they want, and can agree to the final look.
After that comes the design. Design the data model, if there is one, and how the code will work. And go from there. As the customer doesn't care how it works, reviewing the various phases with the customer achieves nothing. Just release it when it is done.
Overall, Waterfall works well. However, if the steps are not followed, nothing will work. All Agile does is minimize the damage, but takes longer and works against consistency in the backend.
I think there are two problems with the waterfall approach. The first is that it separates design from development, either by team or by time. One thing that is hard to learn except from experience is where and when things should be designed first or left for later. This doesn't mean always one or the other....
The second is that requirements always change, but they don't always change in the same way. Again, this is domain-specific. So the waterfall method, as I point out, is applicable in some cases.
What I see you saying, though, is that one does need to design before coding. This I agree with. Also that a waterfall in miniature is helpful. This I agree with too. So I am not actually sure where we disagree.
The point of subsidiarity is to ensure that design and coding are closely tied both by team and by time, and that the pieces are small enough that the design can be done right. That has a lot in common with both waterfall and agile methodologies.
I disagree with the contention that requirements always change. I see it as a euphemism used to excuse laziness in gathering requirements.
The requirements are what they are. In general, it is not a wishlist; rather, a specific problem or situation arises that requires resolution. Usually, that does not change.
The changes I have seen have to do with the UI. They want it to do this, or, because they had not seen it, they didn't realize they also needed that. The requirements haven't changed; the UI, however, needs more revisions.
In some cases, the requirements change often, because the situation demands it. In those scenarios, it is more efficient to follow the Un*x paradigm of separate tools. Each one, however, would be designed completely before being coded.
I see Waterfall as an unsuited methodology for software development. I see Agile as an improvement. In my experience, Waterfall does not work well because requirements are rapidly changing. Yes they are. Yes they are.
The idea is to simply decrease the amount of time it takes to deliver features and fixes, by reducing the iteration cycle to adjust for these rapidly changing requirements we software developers face. Stakeholders must be on board, for it is they who sign off on which features and fixes to prioritize. Automation and tests with high value are also key.
But let's check some history, shall we? Read up on Winston W. Royce; the important quote I wish to point out is: 'According to Royce, in the process model "the design iterations are never confined to the successive step", and that model without iteration is "risky and invites failure". As an alternative, Royce proposed a more incremental development, where every next step links back to the step before.'
And isn't that what Agile strives to be? More incremental? More immediate feedback? This is just evolution of software deployment.
jeffa
L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)
I see Waterfall as an unsuited methodology for software development. I see Agile as an improvement. In my experience, Waterfall does not work well because requirements are rapidly changing. Yes they are. Yes they are.
But they aren't always. Consider embedded development of computers controlling cars, or things like control software for radiotherapy. Bugs there can cost lives and the requirements are well defined, so full-blown waterfall processes are appropriate.
Moreover, they aren't rapidly changing for all parts of a project. Usually, with a little experience, you can identify the areas where requirements are likely to change most. If everything else is well componentized, up-front design really isn't a bad thing. And if the components are small, iterations don't need to be long.
That's why subsidiarity is important: it focuses on the design and development of small components by small teams, not big nebulous projects by very unstructured teams. A project may have parts like that, but the more contained your rapidly evolving requirements are, the quicker you can deliver on them.
Re: Beyond Agile: Subsidiarity as a Team and Software Design Principle
by sundialsvc4 (Abbot) on Jul 21, 2015 at 12:15 UTC
This continues to be a most interesting episodic series of articles. Thank you for continuing to post them.
This is also one of the first “articles on Agile” that tacitly acknowledges some of its deepest flaws, and that offers up a possibility for correcting them. While I generally agree with the stratagems and professional perspectives you have listed here, and so am not rebutting them, I still come away with the feeling (and it has grown over the many years now) that the much-maligned “Waterfall” was probably, in a great many ways, right! The construction of a great many tall and heavy things follows it, usually by law: the construction of things that don’t move, and especially the construction of things (such as software and other big, dangerous machines) that do.
There is definitely a political problem with software: for a long time, the team doesn’t appear to be visibly doing something, and management naturally wants to see “code writers” “writing code.” Nails being hammered into boards. Concrete being poured. (And they’d better not see a jackhammer being taken to that new slab ...)
But in reality ... and this goes back to the Managing the Mechanism e-book (Kindle, iOS) ... software is actually a machine with near-infinite internal complexity. Everything is, or can be, “coupled” to everything else. Yes, you can have committee meetings about what color a particular room should be painted, but not much more. Analogs to building-construction also fall flat: buildings don’t move.
I also find that it is important to separate coding from testing. A lot of development may be done before it is possible to do more than superficial testing, because the moving-parts that are to be moving-together must of course first be built. The overall mechanism must be broken down into sub-systems and these must be tested: both unit-level and integration testing must proceed continuously. You cannot do this unless the completed-and-tested parts do integrate with one another, and will integrate with what is yet to come, all without ever breaking. You can build things in-parallel, but you absolutely can’t design them that way. Having separate design teams forces the two teams to communicate, which forces the design itself to be in a communicable form.
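For instance, one team can unit-test its part against a stub of a subsystem that another team is still building, so both proceed in parallel against the agreed design. A minimal sketch, with every name invented for illustration:

    use strict;
    use warnings;
    use Test::More tests => 1;

    # Stand-in for the telemetry subsystem another team is still building;
    # only the agreed-upon interface (record_event) is imitated here.
    package Stub::Telemetry;
    sub new          { return bless { events => [] }, shift }
    sub record_event { my ($self, $name) = @_; push @{ $self->{events} }, $name; }

    package main;

    # The unit under test needs only something that honours the interface.
    sub process_order {
        my ($order, $telemetry) = @_;
        $telemetry->record_event('order_processed');
        return $order->{qty} * $order->{unit_price};
    }

    my $telemetry = Stub::Telemetry->new;
    is( process_order({ qty => 3, unit_price => 5 }, $telemetry),
        15, 'order total computed against the stubbed subsystem' );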
And this one key point inevitably becomes clear: there must be a master blueprint, a master design document that is built first(!) and that does not(!) change. You can sneak-around that requirement if you want to, but the teams ... no matter how many teams there are or aren’t ... do not “snap into rhythm and start pulling together as one” until that over-arching, master design is clear to all and stable. Anarchy is no substitute for design.
“I See Dead Projects™,” and most project failures or near-failures come from process problems alone.
Because of the near-infinite complexity of the thing, computer software suffers from “the Jell-O® Problem.” When you so much as touch it, anywhere, it wiggles everywhere. If the design of the entire thing is not pretty-much known in advance, and adhered to, the situation becomes even worse: you are now ripping parts out, after the design changed on-the-fly (or, was arrived-at on the fly), and it is basically impossible to know where the dependencies and repercussions will be. I personally think that Agile, as practiced, makes the problem much worse for two reasons: (1) it encourages practices that rob the structure of its internal stability, and (2) it sets the expectation, both in the management and in the team(s), that this is “the right thing to do.”
We see an even-more insidious manifestation of this problem today in mobile applications that are being built using rapid-development frameworks such as Ionic or Phonegap. Apps are being built “rapidly,” mostly by amateurs, and, even though the amount of code that the team has written is comparatively small, the source-code base of which the application actually consists might be > 120,000 lines. There is a frightening level of internal complexity there, but superficially the presence and the implication of that complexity is not recognized. Change is made like the (bad) cell-phone commercial: “can you hear me do you like it now?” After a surprisingly-few iterations of this, the entire application is a brittle pile of broken glass. Not a “tool problem.” A “process problem,” but one that the tool encourages.
I like this article, nevertheless, because you deal with the realities vs. the theory or the “principles.” What is actually done in, and by, a seasoned and hard-working professional team. Versus what the manifestos say. Again, thanks for sharing.
This continues to be a most interesting episodic series of articles. Thank you for continuing to post them.
I think you mean eyepopslikeamosquito's series. This seems to be einhverfr's first meditation on the subject.
Also, I disagree with, or would offer a different rationale for, most of what you said. Quick examples: Testing can begin before any code is written. Your fluff about the necessity of integration is in complete contradiction to the definition of "unit" testing.
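A minimal sketch of test-before-code, with the module name and interface invented for illustration; the test is written against the agreed interface and fails until someone actually writes My::Invoice:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Written before My::Invoice exists; this documents the agreed
    # interface and fails until the module is implemented.
    use_ok('My::Invoice');

    my $inv = My::Invoice->new( amount => 100, tax_rate => 0.2 );
    is( $inv->total, 120, 'total includes tax' );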
Anarchy is no substitute for design.
And yet for thousands of years of human history we said exactly the opposite about Nature because anarchy produces streamlined, highly-tailored, efficient results given the time and capital^W resources. It's rigid master plans that fail predictably. "Tactics" ne "Strategy".
I would agree that anarchy is no substitute for design. You can't have top-notch security without design for example.
The problem is that usually we are given a false choice of "let's do the blueprints first and fully design everything to the smallest detail and then code" and "let's just start coding." In reality the choice isn't between micromanagement and anarchy, but between segmented responsibility with coordination and either of those two. In my view, segmented responsibility is the clear winner.
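As a toy sketch of what I mean by segmented responsibility (all names invented for illustration): each team owns a package with a deliberately narrow interface, and coordination happens only through that interface.

    package Billing::RateLookup;        # owned by one team
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        return bless {%args}, $class;
    }

    # The only promise made to other teams: rate_for() takes a
    # customer id and returns a numeric hourly rate.
    sub rate_for {
        my ($self, $customer_id) = @_;
        return $self->{rates}{$customer_id} // $self->{default_rate};
    }

    package Billing::InvoiceWriter;     # owned by another team
    use strict;
    use warnings;

    # This team codes against rate_for() alone; how rates are stored
    # or computed can change without this code ever knowing.
    sub line_total {
        my ($class, $lookup, $customer_id, $hours) = @_;
        return $lookup->rate_for($customer_id) * $hours;
    }

    1;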