
Re^2: Cyclomatic Complexity of Perl code

by simonm (Vicar)
on Dec 08, 2004 at 16:34 UTC


in reply to Re: Cyclomatic Complexity of Perl code
in thread Cyclomatic Complexity of Perl code

As stvn pointed out, the most effective way to analyze Perl software is not through static source-code analysis, but by taking advantage of the language's dynamic nature to gather information at run time.

Of course, that does mean that the expensive analysis software your company bought wasn't going to be any help, but it doesn't mean the task is impossible.

Looking at the output of the various profiling and coverage tools, it seems you could perform a similar code analysis, with reasonable results, by combining source-code and run-time information.
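
For instance, here is a minimal sketch of my own (deliberately crude, and not any product's actual method) of the static half of such a combination: count decision points in the source and treat the result as an approximate McCabe number. A real tool would parse the code properly (with PPI, say) and fold in run-time branch statistics from a coverage tool such as Devel::Cover.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Crude illustration only: approximate a McCabe-style complexity
    # score by counting branch keywords and operators in the input.
    # A real analyzer would parse the source rather than pattern-match
    # it, and would add run-time branch data from a coverage tool.
    my $decisions = 0;
    while ( my $line = <> ) {
        next if $line =~ /^\s*#/;    # skip comment-only lines
        $decisions += () = $line =~
            /\b(?:if|elsif|unless|while|until|for|foreach)\b|&&|\|\||\?/g;
    }
    print "approximate cyclomatic complexity: ", $decisions + 1, "\n";

Run it as, say, perl complexity.pl MyModule.pm (file name invented); the regex will over-count things like question marks in strings, which is exactly the sort of imprecision a parser-based tool would avoid.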

Re^3: Cyclomatic Complexity of Perl code
by BrowserUk (Patriarch) on Dec 08, 2004 at 21:14 UTC

    I was commenting directly on the static method the OP asked about, and on static methods in general.

    The source-code analysis software we were evaluating was for C, back in the early '90s; it was rejected completely and probably died a death. The best tool I saw for code-complexity analysis was an ICE-based tool that operated on the compiled (with probes) executable and instrumented the entire program, not individual source files.

    The problem with instrumenting individual files is that it is pretty easy to ensure that each file has a very low complexity value. One easy step is to make sure that each file has almost nothing in it. But that is self-defeating if the number of source files trebles. Even if this simple step is avoided, the other sanctioned ways of reducing complexity result in much higher interface complexities, which is again self-defeating.

    As a measure, complexity is a very bad indicator of anything useful. Some (most) systems are complex by their very nature.

    A 747 is hugely complex, but would it be "better" if you removed all the dual and triple redundancy systems to reduce complexity?

    The most reliable car is one without an engine. It's less complex, but is it "better"?

    It's not just that it would be difficult to produce a measure of complexity for Perl source code, even by attempting to combine static and dynamic measurements; you also have to consider what that measure would mean if it were produced.

    Measuring the complexity of the source code, or how hard the runtime code has to branch to do what it does, says much more about the complexity of the problem being solved than about the 'goodness' of the way the solution was written.


    Examine what is said, not who speaks.
    "But you should never overestimate the ingenuity of the sceptics to come up with a counter-argument." -Myles Allen
    "Think for yourself!" - Abigail        "Time is a poor substitute for thought"--theorbtwo         "Efficiency is intelligent laziness." -David Dunham
    "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
      I was commenting directly on the static method the OP asked about, and on static methods in general.

      Understood -- but why assume that Cyclomatic Complexity or Interface Complexity *must* be analyzed statically?
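
      For example (a toy of my own devising, not any real tool's method), one could tally decision points as they are actually exercised at run time rather than as they appear in the source:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # Toy run-time measurement: wrap each condition so we can count
          # how often every decision point is actually exercised. In real
          # code you would let a tool like Devel::Cover gather the branch
          # statistics instead of hand-instrumenting like this.
          my %hits;
          sub decide {
              my ( $label, $cond ) = @_;
              $hits{$label}++;    # record that this decision point ran
              return $cond;
          }

          for my $n ( 1 .. 10 ) {
              if ( decide( parity => $n % 2 ) ) { print "$n is odd\n" }
              else                              { print "$n is even\n" }
          }
          print "$_: exercised $hits{$_} times\n" for sort keys %hits;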

      As a measure, complexity is a very bad indicator of anything useful. Some (most) systems are complex by their very nature.

      I understand that metrics are often abused, but that doesn't mean they're useless.

      It would be foolish to try to maximize one metric without considering the tradeoffs -- as shown by your experience of driving down CC while allowing LOC to mushroom... But being able to compare several different implementations on the basis of multiple metrics seems helpful.

      The most reliable car is one without an engine. It's less complex, but is it "better"?

      Of course you should include "do the tests pass" as one of your key metrics.

        Order! Oooor-deeer! I refer my honourable friend to the answer I gave a few moments ago. In particular, the fifth through second last paragraphs.

        I want metrics. I just feel (and have experienced) that code complexity as a metric is useless without some way to relate the complexity of the solution to the complexity of the problem it solves.

        For example: one of the sanctioned techniques for reducing complexity was to make the body of any block construct--while or for loop, or if/else--that itself contained any conditional code into a separate subroutine. Thus the complexity of any subroutine was fixed in magnitude, because the depth of decision points in any single subroutine was only ever 1.
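
        A hypothetical sketch of that technique (all names invented for illustration); first the nested original, then the "sanctioned" rewrite:

            # Nested original: two levels of decision points in one routine.
            sub process_order_nested {
                my ($order) = @_;
                if ( $order->{valid} ) {
                    for my $item ( @{ $order->{items} } ) {
                        if ( $item->{qty} > 0 ) {
                            print "shipping $item->{sku}\n";
                        }
                    }
                }
            }

            # "Sanctioned" rewrite: each conditional body becomes its own
            # subroutine, so no routine nests decisions deeper than 1.
            sub process_order {
                my ($order) = @_;
                handle_valid_order($order) if $order->{valid};
            }

            sub handle_valid_order {
                my ($order) = @_;
                ship_item($_) for @{ $order->{items} };
            }

            sub ship_item {
                my ($item) = @_;
                print "shipping $item->{sku}\n" if $item->{qty} > 0;
            }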

        The argument went that this reduced the size of each routine, and so reduced the complexity of maintaining that piece of code.

        The basis of the approach is "Brevity == clarity": a maxim that I whole-heartedly agree with, and which I hold foremost in my coding to this day.

        However, what it conceals is the increase in complexity that arises from ensuring that the bodies of all those extra subroutines have access to the environmental state of the code from which they are called. Essentially, they need access to any local variables, parameters, etc. that they would have had access to when coded in-line.

        That increase in interface complexity--and the increase in the number of subroutines that results from all those non-reusable subroutines--entirely negates the reduction in complexity of the parent subroutines.
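
        To make that concrete (again, names invented for illustration): once the loop body is hoisted out, every local the in-line code touched has to be threaded through the new interface.

            sub apply_discounts {
                my ( $order, $rate, $currency, $log_fh, $opts ) = @_;
                discount_item( $_, $rate, $currency, $log_fh, $opts )
                    for @{ $order->{items} };
            }

            sub discount_item {
                # five parameters, just to recreate the environment the
                # in-line code would have had for free
                my ( $item, $rate, $currency, $log_fh, $opts ) = @_;
                $item->{price} *= 1 - $rate;
                printf {$log_fh} "%s now %.2f %s\n",
                    $item->{sku}, $item->{price}, $currency
                    if $opts->{verbose};
            }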

        Even measuring the effect requires considerable refactoring effort. So it's no good just saying that you need to combine the measure of code complexity with the measure of interface complexity in order to decide which is the better implementation: you have to produce both implementations of the same code in order to make your measurements.

        The chances are that the results of the combined measures will be much of a muchness: you've simply moved the complexities around a bit. But even if there is a clear winner one way or the other, both pieces of code do the same thing (assuming all function and integration tests pan out).

        So what did you achieve? Even if the over-modularised version is easier to maintain--and the jury is still out on that call--is that easier maintenance worth the cost and effort of making the determination?

        Now the sales-pitch answer was that, by doing the comparison, it was possible to determine which coding techniques, constructs and practices resulted in easier-to-maintain code, and then use coding standards to enforce their use on all new projects.

        Nice theory. I've read that you can get by in most languages by knowing 600 to 1000 words and a little grammar. I pretty much achieved this during my last overseas assignment. It allowed me to buy bread and beer, ask where the loos (toilets) were, find my way to the police station, and handle many other everyday tasks and chores. It took me 3-10 times as long as it would in English, but I could get there.

        But try having a conversation beyond "Hi! How are you?", or even understanding the native-language reply to that question, and you will see the problem.

        So it is with programming. Restrict your use of a language to only that subset of its constructs and techniques that the metrics say are easy to understand, and everything takes twice as long to write and three times as long to run.

        You cannot reduce the complexity of the problem, by measuring the complexity of the solution.


