I've decided to make one last-ditch attempt at getting co-workers to apply better practices to their work. To date, I'm the only one who writes test suites or does any form of defensive programming. I've tried to win people over to better styles through informal conversations, but most have confessed that they don't see the point of it all.
However, I have finally (after years!) managed to get a weekly development meeting going, with agendas etc., so I'd like to give it another shot. I'm aiming to do a mini presentation on the whole thing. To help with that,
I figured it would be useful to have a general script that I can print and hand out (for people to read later). I'd take this longer prose and summarise it into key bullet points, hopefully encouraging people to read the fuller thing (or, failing that, it gives me a script to work to). I also want to produce a simple demo project that starts as a ball of mud and then has tests and defensive programming applied to it.
I've got a wordy first draft which is heavily stolen from books and sites such as refactoring.com and Code Complete. I'm not too bothered about that, but I would appreciate feedback on what to add and how to construct the demo app. Also, if anyone has done anything like this before and has tips to share, I'd be grateful!
Design, Implementation and Refactoring
Design is considered a 'wicked problem': in many cases, to produce
a design the problem has to be solved twice. The problem
is solved (even if only in part) to produce the design, and then solved again to prove that
the solution works. In fact, it may be that only once the problem is solved
do side problems emerge, their existence unknown until
the original problem has been worked on.
Whatever the process, a design is produced via a combination of heuristic
judgements, best guesses and assumptions. Because of this, many mistakes are made
along the way. In fact, a good solution and a sloppy
one may differ only in one or two key decisions, or in choosing the
right trade-offs.
Because of this, good designs evolve through meetings, discussions and
experience. In some cases, they also improve through partial implementation
(hence the 'wicked problem' moniker).
A design, to stand a chance of working, should also restrict possibilities.
Because time and resources are not infinite, the goal of the design should be
to simplify the problem into a form acceptable for implementation. Not
all processes for this are the same: each new problem introduces an entirely
new set of variables, and failure to recognise this can result in the wrong
technique, tool or process being applied.
The success of the implementation can be measured in different ways. Glibly,
it can be measured as 'it does what we want' and is then left to rot until
the next problem occurs. Many projects fail due to
poor management, poor requirements and so on, but equally many (especially software
projects) fail due to complexity.
Managing complexity is a key factor in ensuring success. If a solution is too
complex (either by design or by evolution) then it becomes increasingly difficult,
and eventually impossible, to maintain. This is a major source of cost and resource overhead.
Complexity can arise in some simple cases, for example:
- A complex solution to a simple problem
- A simple, incorrect solution to a complex problem
- An inappropriate, complex solution to a complex problem
Managing complexity makes many other design considerations much more straightforward.
Characteristics of a good design:
- Minimal complexity
- Ease of maintenance
- Loose coupling
- Extensibility
- Reusability
- High fan-in (heavy use of low-level utility code; software design)
- Low-to-medium fan-out (each module depends on only a few others; software design)
- Portable
- Lean
- Layered (predominantly software design)
- Standard techniques
Over time, common solutions to common problems emerge. These common solutions are
reasonably abstract: general enough to be applicable in many situations, but specific enough
that they can be recognised when applied to a solution. Applying these common solutions
(or patterns) can achieve many of the above characteristics. Unfortunately, they
are not always applied (or worse, are applied incorrectly) for the reasons
already stated.
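To make this concrete, here is a minimal sketch in Python (the pricing rules and names are hypothetical, purely for illustration) of applying one such common solution: replacing a chain of conditionals with a dispatch table, a lightweight form of the Strategy pattern.

```python
# Before: hypothetical pricing rules as a chain of conditionals.
def price(order_total, customer_type):
    if customer_type == "regular":
        return order_total
    elif customer_type == "member":
        return order_total * 0.9
    elif customer_type == "staff":
        return order_total * 0.8
    raise ValueError("unknown customer type: %s" % customer_type)

# After: the same rules as a dispatch table (a lightweight Strategy).
# Adding a customer type is now a data change, not a logic change.
DISCOUNTS = {"regular": 1.0, "member": 0.9, "staff": 0.8}

def price_v2(order_total, customer_type):
    if customer_type not in DISCOUNTS:
        raise ValueError("unknown customer type: %s" % customer_type)
    return order_total * DISCOUNTS[customer_type]
```

The second version also exhibits several of the characteristics above: it is leaner, simpler and easier to extend.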
It is often the case that the first implementation
(or even the first few of many) is not easily maintainable, simple or reusable.
Rather than staying stuck in the design process, it is more advantageous to
take a pragmatic approach. As stated, it can be impossible to produce a
satisfactory solution to a problem without first solving it, at least in part.
Here, the best approach is to make the best decisions possible at the
time, then re-examine the problem and the solution for signs of a good and/or bad
design. With the experience of the implementation, improvements are easier to
spot and mistakes easier to find. This process is called refactoring. Refactoring
can include shifting to patterns (or between patterns) where applicable, removing
now-unneeded functionality, or reducing complexity.
By refactoring, we examine what we thought we knew, what we tried and what actually
happened, and we try to make the solution:
- simpler
- easier to maintain
- reusable if possible
- easier to understand
Even achieving only one of these can be critical to the long-term success of a project.
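As a minimal before/after sketch of refactoring in Python (the function and its comma-separated input format are hypothetical), the original below works but tangles parsing and totalling together; the refactored version gives each step a name so it can be understood, tested and reused on its own.

```python
# Before: parsing, validation and totalling tangled into one function.
def report(lines):
    total = 0
    for line in lines:
        parts = line.split(",")
        if len(parts) == 2 and parts[1].strip().isdigit():
            total += int(parts[1])
    return "Total: %d" % total

# After: the parsing step is extracted and named, so it can be tested
# (and reused) independently of the report formatting.
def parse_quantity(line):
    """Return the quantity from a 'name,quantity' line, or None if malformed."""
    parts = line.split(",")
    if len(parts) == 2 and parts[1].strip().isdigit():
        return int(parts[1])
    return None

def report_v2(lines):
    quantities = (parse_quantity(line) for line in lines)
    return "Total: %d" % sum(q for q in quantities if q is not None)
```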
While implementing, analysing and refactoring a solution to a problem, it is important
to be able to prove that your solution works as promised. Critically, it is also important
to be able to prove what happens to your solution when things don't go to plan. Understanding
your corner cases and the limits of your inputs and outputs will test your assumptions. It
will also help you plan for and mitigate unforeseen circumstances (at least as
far as possible).
In software development, software testing is used to provide this proof. By tying the development
of tests directly to the implementation of the solution, the solution is built in
parallel with the tests that prove it works (a minimal sketch follows the list below). Aside from the benefits above, this testing
also provides:
- a test of the design at a low level (how it works, how it couples, its simplicity etc.)
- proof that changing one part of the system hasn't broken another part of the
same system
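Here is what that can look like, using Python's standard unittest module (the mean() function is hypothetical, standing in for real code). Note that the tests pin down the failure behaviour as well as the happy path.

```python
import unittest

def mean(values):
    """Arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

class MeanTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_empty_input_is_rejected(self):
        # Proves the failure behaviour, not just the happy path.
        with self.assertRaises(ValueError):
            mean([])

if __name__ == "__main__":
    unittest.main()
```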
Accepting that software must evolve as requirements change, and that the complexity of the solution
changes with it, mandates that software testing be included in the production of any solution right
from the outset. This testing allows the assumptions to be checked and rechecked even before
higher-level testing is considered. After all, if the software doesn't work as advertised there
is no point arranging usability and acceptance testing. If you don't know how your software will
fail, there is no point putting it up for client review.
In producing your implementation, it is also important to code defensively. Rather than assuming
(say) that a resource is available, you test that your assumptions hold before working with it.
By doing so, you produce a simple first candidate area for your software tests. If you are working
with a resource and it 'goes away', you can write a test that re-enacts this scenario. Once this
test is written, you work on your code until all the problems are fixed, handled appropriately or
documented. Through this simple process of defensive programming coupled with testing, the reliability
of the solution should increase dramatically.
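A minimal sketch of this idea in Python (the settings file, its 'timeout' key and the loader are all hypothetical): the loader checks its assumptions rather than assuming them, and the test re-enacts the 'resource goes away' scenario.

```python
import json
import os
import unittest

def load_settings(path):
    """Defensively load JSON settings, checking each assumption before use."""
    if not os.path.exists(path):
        raise FileNotFoundError("settings file missing: %s" % path)
    with open(path) as handle:
        settings = json.load(handle)  # raises ValueError on malformed JSON
    if "timeout" not in settings:
        raise KeyError("settings file must define 'timeout'")
    return settings

class LoadSettingsTests(unittest.TestCase):
    def test_missing_file_is_reported(self):
        # Re-enacts the 'resource goes away' scenario described above.
        with self.assertRaises(FileNotFoundError):
            load_settings("/no/such/path/settings.json")

if __name__ == "__main__":
    unittest.main()
```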
Testing can also form part of the installation process. Deployment of the solution needs proof
that it is installed and operating correctly. As a suite of tests and benchmarks has already
been produced, what better way of proving the deployment is ready for acceptance testing?
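For example, a post-deployment smoke test could simply re-run the existing suite against the installed system and fail the deployment if anything breaks. A sketch in Python, assuming the tests ship in a tests/ directory alongside the application:

```python
import sys
import unittest

def main():
    # Discover and run the suite that was built alongside the solution;
    # a non-zero exit code fails the deployment step.
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main()
```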
Summary
- Examine the problem simply
- Worry about only what you need to implement
- Implement it as best you can
- Re-examine the solution and the problem
- Repeat
This is all helped by defensive programming, pragmatic design, refactoring towards common solutions
where appropriate, and testing at all stages of implementation.