"I empathise ... It sounds like you have suffered from idiotic applications of misguided rules"

No need for empathy. It was a long time ago, and we didn't suffer :) The result of the study was that we rejected both the tool and the idea.

With respect to your statistics: one set thereof does not a case make.

I would derive two things from my reading of the numbers.

  • Three modules are heavily over-commented.
  • Large modules are more complicated than small ones.

    What I cannot say from those numbers is whether the complexity arises because of:

    • the needs of the algorithm required to perform the function of those methods;
    • a bad algorithm having been used;
    • a good algorithm having been badly implemented;
    • the "complex" (read: big) methods doing too much.

    Indeed, without inspecting the source code, I cannot even assess the accuracy of those metrics.

    It could be that "HitTheEdge" contains a recursive algorithm that is extremely complicated to follow and modify, but simple in its code representation.
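    By way of an entirely hypothetical sketch (Ackermann's function, nothing from your program), here is source that any counting tool will score as near-trivial, yet whose behaviour is notoriously hard to follow or safely modify:

        use strict; use warnings;

        # A handful of lines, and a complexity counter will say so;
        # but try predicting its run time, or changing it, by eye.
        sub ack {
            my( $m, $n ) = @_;
            return $n + 1              if $m == 0;
            return ack( $m - 1, 1 )    if $n == 0;
            return ack( $m - 1, ack( $m, $n - 1 ) );
        }

        print ack( 2, 3 ), "\n";    # prints 9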

    Or that "GenMoves" is a huge if/then/else structure that would be better implemented as a dispatch table

    Or that "Slide" uses a string eval to replace itself with an extremely complicated subroutine that the source code analyser sees simply as a big string constant.

    And that's my point. You are already looking to derive further metrics from the generated metrics, but there is no way to validate the efficacy of those metrics you have, beyond inspecting the code and making a value judgement.

    So you are already falling into the trap of allowing the metrics to become self-serving; but the metrics themselves are not reproducible, scientific measurements. They are simply "indicator values".

    When you measure the length, mass, hardness, reflectivity, temperature, elasticity, expansion coefficient etc. of a piece of steel, you are collecting a metric which can be reproduced by anyone, anywhere, any time. Even if the tools used to make the measurement are calibrated to a different scale, it is a matter of a simple piece of math, or a lookup table to convert from that scale to whichever scale is needed or preferred. This is not the case for any of the metrics in your table.

    You don't say what language your program is coded in, but I could (probably) take all of your methods and reduce them to a single line. It would be a very long line, but in most languages it would still run perfectly well. What effect does that have on your metrics?
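    A toy demonstration of the point, assuming Perl:

        use strict; use warnings;

        # Formatted across six lines ...
        sub clamp {
            my( $v, $lo, $hi ) = @_;
            return $lo if $v < $lo;
            return $hi if $v > $hi;
            return $v;
        }

        # ... and the identical logic on one (long) line. perl runs both
        # just as happily; only the layout-sensitive numbers change.
        sub clamp1 { my( $v, $lo, $hi ) = @_; return $lo if $v < $lo; return $hi if $v > $hi; return $v; }

        print clamp( 12, 0, 10 ), ' ', clamp1( 12, 0, 10 ), "\n";    # prints 10 10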

    Equally, we could get half a dozen monks (assuming Perl) to refactor your methods according to their own formatting and coding preferences and skill levels. And even if they all did a bang-up job of reproducing the function of your originals--bugs and all--and we then used the same program you used to measure their code, they would all produce different sets of numbers.
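    A small sketch of why: here are three functionally identical subs that a decision-counting tool can score three different ways, depending on whether it counts if-statements, ternaries, or nothing at all when the branch hides inside a library call:

        use strict; use warnings;
        use List::Util qw( max );

        sub bigger_if      { my( $x, $y ) = @_; if( $x > $y ) { return $x } return $y }
        sub bigger_ternary { my( $x, $y ) = @_; return $x > $y ? $x : $y }
        sub bigger_util    { my( $x, $y ) = @_; return max( $x, $y ) }   # branch invisible to the tool

        print bigger_if( 3, 7 ), ' ', bigger_ternary( 3, 7 ), ' ', bigger_util( 3, 7 ), "\n";    # 7 7 7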

    And that is the crux of my distaste for such numbers. They are not metrics. They do not measure anything! They generate a number, according to some heuristic.

    They do not measure anything about the correctness, efficiency or maintainability of the code that gets run, they only make some guesses, based upon the way the coder formatted his source code.

    1. They do not conform to any standards.
    2. They are not transferable between programmers.
    3. They are not transferable between algorithms.
    4. They are not transferable between programs.
    5. They are not transferable between languages.
    6. They are not transferable between sites.
    7. They are not transferable between design or coding methods (OO, procedural, functional, etc.).
    8. They are not transferable between assessment methods or tools.

    In short, they are not comparable, and you cannot perform math with them.

    As proof of this, take a look at your "PieceToString" and "HitTheEdge" methods. They have an equal 'complexity' when measured by the same tool. Is this obvious, or even definable, from looking at the source code? If I am given two pieces of steel 10 cm long, even without measuring them with a ruler, I can easily tell they are the same length. No such comparison is possible for source code.
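    Here is a hypothetical sketch of how that equality comes about (the bodies are invented; I have no idea what your methods contain). Each sub has exactly two decision points, which is all McCabe's v(G) = decisions + 1 ever sees, so both score 3 -- however differently they read:

        use strict; use warnings;

        sub piece_to_string {
            my( $piece ) = @_;
            return 'empty'   if !defined $piece;       # decision 1
            return uc $piece if $piece =~ /^[kq]$/;    # decision 2
            return $piece;
        }

        sub hit_the_edge {
            my( $x, $dx, $width ) = @_;                # assumes $dx != 0
            while( $x > 0 && $x < $width ) {           # decisions 1 and 2
                $x += $dx;
            }
            return $x;
        }

        print piece_to_string( 'k' ), ' ', hit_the_edge( 3, 2, 8 ), "\n";    # K 9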

    The tool has become the only way of comparing source code, and as it does not (and could not) adhere to any standard, all measurements are relative, not absolute. So, unless everyone agrees on which tool/language/coding standards etc. etc. to use, there is no way to compare two versions of the same thing.

    That means that in order to make comparisons, you have to implement every possible (or interesting) version of the source code, before you can make any inference about whether any one is good or bad.

    And even if you could code every possible implementation of a given algorithm, and could prove that they all produced exactly the same results, and you generated your numbers: What would it tell you?

    Should you pick the version with the lowest complexity rating? The shortest? The longest? The one with the highest ratio of comments?

    Would you make any choice based on the numbers alone? Or would you have to look at the source code?

    If you admit that you would have to look at the source code, then you have just thrown your "metrics" in the bin in favour of your own value judgement.

    And if you didn't, then you should publish the formulae by which you are going to juggle all those numbers in order to make your decision. It should make for interesting reading.


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.

    In reply to Re^3: Cyclomatic Complexity of Perl code by BrowserUk
    in thread Cyclomatic Complexity of Perl code by ryanus
