http://qs1969.pair.com?node_id=413367


in reply to Re^3: Cyclomatic Complexity of Perl code
in thread Cyclomatic Complexity of Perl code

I was commenting directly at the static method the OP asked about, and at static methods in general.

Understood -- but why assume that Cyclomatic Complexity or Interface Complexity *must* be analyzed statically?

As a measure, complexity is a very bad indicator of anything useful. Some (indeed most) systems are complex by their very nature.

I understand that metrics are often abused, but that doesn't mean they're useless.

It would be foolish to try to maximize one metric without considering the tradeoffs -- as shown by your experience of driving down CC while allowing LOC to mushroom... But being able to compare several different implementations on the basis of multiple metrics seems helpful.

The most reliable car is one without an engine. It's less complex, but is it "better"?

Of course you should include "do the tests pass" as one of your key metrics.
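
To make the "several implementations, multiple metrics" idea concrete, here is a crude, purely illustrative sketch (the keyword count is only a stand-in for a real complexity measure; CPAN modules such as Perl::Metrics::Simple do this properly) that puts LOC and decision-point counts for candidate implementations side by side:

    use strict;
    use warnings;

    # Naive metric comparison: for each file named on the command line,
    # count non-blank, non-comment lines and branch keywords.
    for my $file (@ARGV) {
        open my $fh, '<', $file or die "Can't read $file: $!";
        my ( $loc, $branches ) = ( 0, 0 );
        while ( my $line = <$fh> ) {
            next if $line =~ /^\s*(?:#|$)/;    # skip comments and blank lines
            $loc++;
            $branches++ while $line =~ /\b(?:if|elsif|unless|while|until|for|foreach)\b/g;
        }
        printf "%-20s %4d LOC  %3d decision points  (rough per-file CC ~ %d)\n",
            $file, $loc, $branches, $branches + 1;
    }

Run it as, say, "perl compare.pl OldImpl.pm NewImpl.pm", and keep "do the tests pass" as the tie-breaker.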

Re^5: Cyclomatic Complexity of Perl code
by BrowserUk (Patriarch) on Dec 09, 2004 at 01:15 UTC

    Order! Oooor-deeer! I refer my honourable friend to the answer I gave a few moments ago. In particular, the fifth through second last paragraphs.

    I want metrics. I just feel (and have experienced) that code complexity as a metric is useless without some way to relate the complexity of the solution to the complexity of the problem it solves.

    For example: one of the sanctioned techniques for reducing complexity was to make the body of any block construct--while or for loop, or if/else--that itself contained any conditional code, a separate subroutine. Thus, the complexity of any subroutine was fixed in magnitude, because the depth of decision points in any single subroutine was only ever 1.
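
    A minimal sketch of that technique, with subroutine names and data invented purely for illustration: the conditional loop body is hoisted into its own subroutine, so no single subroutine nests one decision point inside another.

        use strict;
        use warnings;

        # Before: the loop and the conditionals live in one subroutine,
        # so its decision points stack up (for + if + elsif).
        sub summarise_orders_inline {
            my ($orders) = @_;
            my ( $total, $flagged ) = ( 0, 0 );
            for my $order (@$orders) {
                if ( $order->{status} eq 'shipped' ) {
                    $total += $order->{amount};
                }
                elsif ( $order->{status} eq 'cancelled' ) {
                    $flagged++;
                }
            }
            return ( $total, $flagged );
        }

        # After: the loop body becomes its own subroutine, so neither sub
        # nests a decision inside another -- but the loop's running state
        # now has to cross a call boundary on every iteration.
        sub summarise_orders_split {
            my ($orders) = @_;
            my ( $total, $flagged ) = ( 0, 0 );
            for my $order (@$orders) {
                ( $total, $flagged ) = tally_order( $order, $total, $flagged );
            }
            return ( $total, $flagged );
        }

        sub tally_order {
            my ( $order, $total, $flagged ) = @_;
            if    ( $order->{status} eq 'shipped' )   { $total += $order->{amount} }
            elsif ( $order->{status} eq 'cancelled' ) { $flagged++ }
            return ( $total, $flagged );
        }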

    The argument went that this reduced the size of each routine, and so reduced the complexity of maintaining that piece of code.

    The basis of the approach is: Brevity == clarity, a maxim that I wholeheartedly agree with, and which I hold foremost in my coding to this day.

    However, what it conceals is the increase in complexity that arises from ensuring that the subroutine bodies of all those extra levels of subroutines have access to the environmental state of the code from which they are called. Essentially, they need to have access to any local variables, parameters etc. that they would have had access to when coded in-line.

    That increase in interface complexity--and the increase in the number of subroutines that results from all those non-reusable subroutines--entirely negates the reduction in complexity of the parent subroutines.
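
    A hypothetical sketch of where that cost shows up (every name here is invented): the extracted body needs every lexical the inline block could simply see, so each one becomes a parameter, and anything it modifies has to be returned or passed by reference.

        use strict;
        use warnings;

        # The loop's working state -- counters, caches, options -- all has
        # to be threaded through the new interface on every call.
        sub scan_log {
            my ( $fh, $opts ) = @_;
            my ( %totals, %seen );
            my ( $line_no, $errors ) = ( 0, 0 );
            while ( my $line = <$fh> ) {
                $line_no++;
                $errors = tally_line( $line, $line_no, $opts, \%totals, \%seen, $errors );
            }
            return ( \%totals, $errors );
        }

        # Six parameters for what was one inline block: the decision count
        # went down, but the interface complexity went up.
        sub tally_line {
            my ( $line, $line_no, $opts, $totals, $seen, $errors ) = @_;
            if ( $line =~ /$opts->{error_re}/ ) {
                $errors++;
                $totals->{first_error} = $line_no unless defined $totals->{first_error};
            }
            $totals->{unique_lines}++ unless $seen->{$line}++;
            return $errors;
        }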

    Even measuring the effect requires considerable refactoring effort. So it's no good just saying that you need to combine the measure of code complexity with the measure of interface complexity in order to decide which is the better implementation: you have to produce both implementations of the same code in order to make your measurements.

    The chances are that the results of the combined measures will be much of a muchness. You've simply moved the complexities around a bit. But even if there is a clear winner one way or the other, both pieces of code do the same thing (assuming all function and integration tests pan out).

    So what did you achieve? Even if the over-modularised version is easier to maintain--and the jury is still out on that call--is that easier maintenance worth the cost and effort of making the determination?

    Now, the sales-pitch answer was that by doing the comparison it was possible to determine which coding techniques, constructs and practices resulted in easier-to-maintain code, and then use coding standards to enforce their use on all new projects.

    Nice theory. I've read that you can get by in most languages by knowing 600 to 1000 words and a little grammar. I pretty much achieved this during my last overseas assignment. It allowed me to buy bread and beer, ask where the loos (toilets) were, find my way to the police station, and handle many other everyday tasks and chores. It took me 3-10 times as long as it would have in English, but I could get there.

    But try having a conversation beyond "Hi! How are you?"--or even understanding the native-language reply to that question--and you will see the problem.

    So it is for programming. Restrict your use of a language to only that subset of the language's constructs and techniques that the Metrics say are easy to understand, and everything takes twice as long to write and 3 times as long to run.

    You cannot reduce the complexity of the problem by measuring the complexity of the solution.


    Examine what is said, not who speaks.
    "But you should never overestimate the ingenuity of the sceptics to come up with a counter-argument." -Myles Allen
    "Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo         "Efficiency is intelligent laziness." -David Dunham
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
      I really like the last sentence.

      These are my notes on complexity from reading The Mythical Man Month:

      Software entities are more complex for their size than perhaps any other human construct, because no two parts are alike. If they are, we make them into one.

      The complexity of software is an essential property. Hence descriptions of a software entity that abstract away its complexity often abstract away its essence.

      /J