in reply to Psychic Disconnect and Object Systems
Invariably, encouraged by many OO languages & frameworks and almost all OO teaching texts, people start out defining their objects by defining their attributes. In my opinion--formed from my personal experience of writing OO and of working with other people's--this is completely the wrong way to approach the problem.
When you write procedural code, you don't start by writing a huge block of variable declarations at the top of the program, subroutine or module. Not even in those languages that require declarations to come at the top of the block. You start by writing the algorithm, and then go back and declare your variables as you need them.
The right way--I know, I know; bear with me--is to define the methods first. Ask yourself not: what do these objects contain? But rather: what do these objects need to do?
The second mistake, one that even very experienced OO programmers and architects seem to make repeatedly, is to subtly alter that second question and instead ask: what could these objects do? And that leads to all sorts of O'Woe.
The right way--again--is to write the code that will use the objects first. And I'm not talking about cutesy ok/nok unit tests either. I mean the actual code. Just create an instance of your object by calling the class's new() (or whatever constructor name makes sense) method with no parameters, and then do what you need to do with it in the context of the calling code.
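By way of a sketch--Invoice, add_line() and total() are names I've just invented for illustration, not any real module--the calling code gets written before the class exists at all (the class itself only appears a couple of paragraphs further down):

    use strict; use warnings;

    ## The calling code comes first; Invoice doesn't exist yet.
    my $invoice = Invoice->new();
    $invoice->add_line( 'Widget', 3, 4.99 );
    $invoice->add_line( 'Gadget', 1, 19.95 );
    printf "Total due: %.2f\n", $invoice->total();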
Out of this fall the methods (and sensible names) you need to call, along with many of the arguments those methods will require. In the writing of the calling code, the interface will change, change again, maybe change back again. Some methods will get broken up into two (or more). Others may be combined. You (I, anyway) often find that things (values, constants etc.) you initially thought belonged inside the class instance are actually application specific, not class/object specific, and so don't belong in the instance. And, much more rarely, vice versa.
Once I'm reasonably happy with the calling code for the class, I can then move on to writing the methods that it uses, knowing not just what they should do, but how they will be called. Which makes writing them much easier. And when writing the methods, it becomes clear what attributes are required. It also becomes much easier to see which attributes need to be stored in the instance data, and which can be generated on the fly. And you should never store an attribute if it can be reasonably regenerated from other attributes.
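Continuing the made-up Invoice sketch from above, the implementation might then fall out something like this. Note that total() is regenerated on demand rather than stored:

    package Invoice;
    use strict; use warnings;

    ## Only the lines themselves need to persist in the instance.
    sub new {
        my $class = shift;
        return bless { lines => [] }, $class;
    }

    sub add_line {
        my( $self, $desc, $qty, $price ) = @_;
        push @{ $self->{lines} }, [ $desc, $qty, $price ];
        return;
    }

    ## Regenerated from the stored lines on demand; storing a total
    ## attribute would just give it a chance to go stale.
    sub total {
        my $self = shift;
        my $total = 0;
        $total += $_->[1] * $_->[2] for @{ $self->{lines} };
        return $total;
    }

    1;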
Working top down this way means that I can concentrate on writing the code that is actually needed, rather than trying to second-guess what might be needed.
Only once the first draft compiles clean--and preferably can be exercised, though that is often not possible without expending huge effort trying to mock shit up around it--do I then look at the interface with a 'what if another application wants to reuse this class some day?' eye, to see if there is anything that can obviously be made more general without compromising its usability/maintainability/performance for this application too much.
An interesting side-effect of this is that you rarely end up with externally visible accessors to your attributes--which is a mighty good thing. And if you follow that logic through, using internal accessors for attributes that have no externally visible interface makes no sense at all. Which makes auto-generation of accessors a complete waste of time.
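In Perl terms--sticking with the invented Invoice class--the attribute just lives in the blessed hash and the methods touch it directly; no accessor, internal or external, ever gets written:

    ## Direct hash access inside the class; no accessor needed.
    sub line_count {
        my $self = shift;
        return scalar @{ $self->{lines} };
    }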
Sitting down to write a new class definition before you've written--and therefore understand, because you do not before--how it will be used, is fundamentally flawed. It just means you are second-guessing the real world, and that leads to whatifitis.
And trying to decide what attributes are needed by an object before you have a clear idea of both the class interface (methods) and their implementation requirements & costs, is insane.
But almost none of the OO texts I've seen teach people to work that way, which may or may not give you a clue as to how much weight you should give to my opinion :)
BTW: Read-only attributes are otherwise known as constants--and are better defined as such. Except for the rarely implemented concept of externally read-only attributes which are internally read-write. That is, an attribute that is modified internally as the object evolves, but that can be read (not written) by external code.
But does anyone use them? Does any OO framework support them? Mostly a getter (no setter) is defined to prevent direct access; and most of the time it is better to simply (re)calculate the value on demand rather than storing it.
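For the rare case where the value genuinely has to be stored, the pattern is simple enough to hand-roll (Counter and tick() are invented names, illustration only):

    package Counter;
    use strict; use warnings;

    sub new { return bless { count => 0 }, shift }

    ## Read-write internally: the attribute evolves with the object...
    sub tick { my $self = shift; return ++$self->{count} }

    ## ...but externally read-only: a getter, and no setter.
    sub count { my $self = shift; return $self->{count} }

    1;

    ## Usage:
    ##     my $c = Counter->new; $c->tick for 1 .. 3;
    ##     print $c->count;    # 3; no way to set it from outside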