in reply to Re^3: Multilevel flexibillity
in thread Multilevel flexibillity

I agree with you (and Abigail-II) that attempting to build flexibility when you don't need it is not a good idea. I would never have thought of disagreeing with that.

I am just saying that flexibility and complexity have a more complex relationship than a simple trade-off. If you attempt to achieve flexibility by embedding decisions everywhere in switches, I guarantee it will always cost you. But I have seen many cases where you can simplify code and make it more flexible at the same time. I think it is important to point this out because in these cases programmers often have trouble seeing the possibility: the choices that make it work seem counter-intuitive.
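To make the contrast concrete, here is a small sketch (hypothetical code, not from the thread; the function and format names are invented for illustration). The first version embeds the decision in a switch, so every new case means editing the function. The second factors the same decision into a dispatch table: it is shorter, and it gains flexibility as a side effect, because new cases can be registered without touching the core logic.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Version 1: the decision is embedded in a switch. Every new
# format requires editing this function.
sub render_switch {
    my ($format, $text) = @_;
    if    ($format eq 'html')  { return "<p>$text</p>" }
    elsif ($format eq 'plain') { return $text }
    else                       { die "Unknown format: $format" }
}

# Version 2: the same decision factored into a dispatch table.
my %render = (
    html  => sub { "<p>$_[0]</p>" },
    plain => sub { $_[0] },
);

sub render {
    my ($format, $text) = @_;
    my $handler = $render{$format}
        or die "Unknown format: $format";
    return $handler->($text);
}

# Flexibility "for free": callers can add handlers without
# modifying render() at all.
$render{upper} = sub { uc $_[0] };

print render(html  => 'hi'), "\n";   # <p>hi</p>
print render(upper => 'hi'), "\n";   # HI
```

The point is that the dispatch-table version is simultaneously less code and more flexible, which is exactly the counter-intuitive possibility described above.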

For a concrete example, take a look at Re: Re (tilly) 6: To sub or not to sub, that is the question? and compare the original with my rewritten version of get(). The rewrite is both shorter and more flexible. Furthermore, with no additional visible code, it manages to add a number of features that the author wanted.

Re: Re: Re^3: Multilevel flexibillity
by zby (Vicar) on Jun 26, 2003 at 08:54 UTC
    This is an attempt to formalize the argument of Abigail-II - it obviously has a flaw, but it can be a starting point for further analysis. First we need to define the complexity of a design - I would take for it the Kolmogorov complexity (in Perl), i.e. the character count of the shortest Perl program complying with it. For the definition of a design I would take an additional set of rules that the program has to comply with.

    Now it is certain that a problem without any additional requirements on the solution program is of no greater complexity than one with additional requirements. This of course holds when the added requirement is a plug-in architecture.

    The problem is whether Kolmogorov complexity is really the complexity perceived by humans.

      Considering the difficulty that humans have in telling whether they have the shortest solution (see your average golf game for evidence, or articles that can be found here for the theory), it is clear that Kolmogorov complexity is not the complexity perceived by humans. Underscoring that is the fact that things which bring you closer to that ideal solution often make the code harder to understand, not easier.

      Furthermore, you are attempting to specify the complexity of the design used to satisfy the requirements in terms of the requirements given. But haven't you ever seen two pieces of code designed to do the same thing that were of vastly different complexity?

      My understanding of the issue is rather different. Mine is shaped by an excellent essay by Peter Naur (the N in BNF) called Programming as Theory Building, which I thought was available online (I read it in Agile Software Development). It isn't, so allow me to summarize it and then return to my understanding.

      Peter's belief is that the programmer, in the act of programming, creates a theory about how the program is to work, and the resulting code is a realization of that theory. Furthermore he submits (and provides good evidence) that much of the way other programmers do and do not successfully interact with the code may be understood in terms of how successful they are at grasping the theory of the code and working within it. For instance a programmer without the theory will not be able to predict the program's correct behaviour. The programmer with the theory will find that obvious, and will also have no trouble producing and interpreting the relevant piece of documentation. The mark of failure, of course, is when the maintenance programmer does not grasp the theory, has no idea how things are to work, and fairly shortly manages to get the program to do various new things, yes, but leaves the design as a piece of rubble. Therefore one of the most important software activities has to be the creation and communication of these theories. How this is done in any specific case need not matter; that it is done is critical.

      So a program is a realization of its design, which functions in accord with some theory, and the theory needs to be possessed by developers (and to some extent users) to understand what the program is supposed to do, and how to maintain it. How does this shed light on the problem of a plug-in architecture?

      It is simple. A program with an internal plug-in architecture is a program whose theory embodies a generalization. Adding the generalization takes work, yes. But with the right generalization, many things that your theory must account for become far easier to say. (The wrong generalization, on the other hand...?) If you have enough special cases that are simplified, the generalization pays for itself, and being able to find and then work with such generalizations is key to being able to work efficiently. It is just like a self-extracting zip that can be shorter than the original document: there is overhead to including the decompression routine, but it saves you so much that you win overall.
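      A minimal sketch of such a generalization (all names here are hypothetical, invented for illustration): the core's theory is reduced to "commands register themselves in a table", and each special case then costs one small registration instead of another branch inside the core.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The generalization: a registry that the core consults. The core
# never needs to know which commands exist.
{
    package Commands;
    my %registry;

    sub register {
        my ($class, $name, $code) = @_;
        $registry{$name} = $code;
    }

    sub run {
        my ($class, $name, @args) = @_;
        my $code = $registry{$name}
            or die "No such command: $name";
        return $code->(@args);
    }
}

# Each "plugin" is now one short registration: a name plus its
# behaviour. Adding a new command touches nothing in the core.
Commands->register(sum     => sub { my $t = 0; $t += $_ for @_; $t });
Commands->register(reverse => sub { join '', reverse split //, $_[0] });

print Commands->run(sum => 1, 2, 3), "\n";      # 6
print Commands->run(reverse => 'perl'), "\n";   # lrep
```

With two commands the registry is overhead; with twenty, the theory "everything is a registered command" is far shorter than twenty branches - the decompression-routine trade described above.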

      Of course I am describing what can happen, if things turn out right. Generalizations are not always good. To the contrary when used by overenthusiastic people, they often become conceptual overhead with little return. (You are in a maze of twisty APIs, all alike...)

        Thanks for the great theory about understanding programs - I think I'll start using it in practice today. This is something we all know but never formulate accurately.

        But still we don't have any definition of what the human-perceived complexity of software is. I see one candidate - the number of axioms in the program's theory. But again, when we add requirements we cannot hope to make the number of axioms smaller.

        Actually the only way to measure the complexity of a design is, for me, by taking the minimum of the complexities of the programs complying with that design. Adding requirements to the design can only result in a subset of the programs complying with it - thus it can only make the complexity larger, never smaller, no matter what measure of program complexity we use.
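        The argument above can be written out explicitly (notation mine, not from the thread):

```latex
% Let P(D) be the set of programs complying with design D,
% and |p| the complexity of program p under any fixed measure.
% Define the complexity of the design as the minimum:
\[
  C(D) = \min_{p \in P(D)} \lvert p \rvert
\]
% If D' is D plus an additional requirement, then every program
% complying with D' also complies with D, so P(D') \subseteq P(D).
% The minimum over a subset can never be smaller:
\[
  C(D') = \min_{p \in P(D')} \lvert p \rvert
        \;\ge\; \min_{p \in P(D)} \lvert p \rvert = C(D)
\]
```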

        Update: A quote justifying Kolmogorov complexity from the Abstract of On the intelligibility of the universe and the notions of simplicity, complexity and irreducibility by Gregory Chaitin (found via the link you provided):

        (...) we defend the thesis that comprehension is compression, i.e., explaining many facts using few theoretical assumptions, and that a theory may be viewed as a computer program for calculating observations. This provides motivation for defining the complexity of something to be the size of the simplest theory for it, in other words, the size of the smallest program for calculating it.

        I recommend the whole article (there are some comments on Wolfram's work too).