in reply to Re^4: Developing a module, how do you do it ?
in thread Developing a module, how do you do it ?

Do these ideas make any sense?

To me, no.

But *I* am the square peg in this. It seems that most people have, like you, bought into the Kool-Aid. I'm not here to tell you how you should do it; just to let you know that there are alternatives and let you reach your own conclusions about what fits best with your way of working. Good luck :)


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Re^6: Developing a module, how do you do it ?
by chromatic (Archbishop) on Mar 03, 2012 at 03:59 UTC
    I've seen people go to extraordinary lengths to satisfy the coverage tools' demand that every code path be exercised, even when many of those code paths are for exceptional conditions that are nearly impossible to fabricate, because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.

    I agree, but don't replace crazy with crazy.

    Write tests that provide value. Use coverage to see if you've missed anything you care about. Think about what that coverage means (all it tells you is that your test suite somehow executed an expression, not that you tested it exhaustively).
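
    For what it's worth, gathering that coverage can be as simple as this (a sketch, assuming Devel::Cover is installed and the tests live under t/ in the usual distribution layout):

    cover -delete                                       # clear any old coverage database
    HARNESS_PERL_SWITCHES=-MDevel::Cover prove -l t/    # run the suite instrumented
    cover -report html                                  # summarise into cover_db/coverage.html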

    If you're not getting value out of these activities, you're probably doing something wrong. That could mean fragile tests or tests for the wrong thing. That could mean that you have a problem with your design. That could mean too much coupling between tests and code or too little (and too much mocking).
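
    As an aside, the "too much mocking" failure mode often starts from something legitimate and small: locally overriding one dependency to reach a rare error path. A minimal, self-contained sketch (the My::Module package and its subs are invented for illustration):

    use strict;
    use warnings;
    use Test::More tests => 2;

    # A toy module whose save_state() depends on a helper that can fail rarely.
    { package My::Module;
        sub _write_file { 1 }                        # the "real" helper always succeeds here
        sub save_state  { _write_file(@_) ? 1 : 0 }
    }

    # Simulate the rare ("exceptional") failure by locally replacing the
    # helper, rather than trying to provoke a genuine disk-full condition.
    {
        no warnings 'redefine';
        local *My::Module::_write_file = sub { 0 };
        ok !My::Module::save_state('state.dat'), 'failure path is exercised';
    }
    ok My::Module::save_state('state.dat'), 'normal path still works';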

    There's no substitute for understanding what you're doing, so understand what you're doing.

    (But don't resolve never to use a lathe or a drill press because you heard someone once used one somewhere for eeeeeevil.)

      There's no substitute for understanding what you're doing, so understand what you're doing.

      (But don't resolve never to use a lathe or a drill press because you heard someone once used one somewhere for eeeeeevil.)

      I utterly agree with both those statements.

      Unfortunately, understanding takes time, practice, and a few different projects and types of project before the patterns from which understanding forms become evident. As a substitute, society tries to teach experience; but that is a very hard thing to do. So you end up with guidelines that omit the reasoning, and which therefore become unassailable dogmas. Hence, I received a recruiter circular a few months back that asked for a "Perl programmer experienced in coding to PBP/PerlCritic standards."

      (Really. Honestly. I just checked to see if it was still hanging around in my email somewhere but it isn't :( )

      With respect to coverage tools: if a module is big enough that I need a computer program to tell me whether I've covered it sufficiently with tests, it is big enough that it will be impossible for a human being to get a clear enough overview to maintain it successfully. It is therefore too damn big.

      But then, for any given problem I tend to write about 1/10 of the code that the average programmer seems to write. Mostly because of the code I don't write.



Re^6: Developing a module, how do you do it ?
by tobyink (Canon) on Mar 04, 2012 at 02:00 UTC

    And some of those .t files are huge and have hundreds of tests.

    So the tests are "huge" yet you want to put them at the end of the module, and force perl to parse them each time the module is loaded?

    It will tell you test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in what .t file and where the failing test happened.

    The Test::More/Test::Simple family of modules report a lot more than that on a test failure. At a minimum they report the test number, line number and filename where the test occurred.

    The documentation for these testing modules strongly encourages you to give each test a name, e.g.:

    is($obj->name, "Bob", "object has correct name");

    If that test fails, you get a report along the lines of:

    ok 1 - object can be instantiated
    not ok 2 - object has correct name
    #   Failed test 'object has correct name'
    #   at t/mytests.t line 12.
    #          got: 'Alice'
    #     expected: 'Bob'
    ok 3 - object can be destroyed
    

    This makes it pretty easy to see which test has failed, and why it's failed.

      So the tests are "huge" yet you want to put them at the end of the module, and force perl to parse them each time the module is loaded?

      Many of the modules on CPAN have huge .t files, because that's what the tools they use force them into writing. But I don't use tools that require me to write a dozen lines of test code to test one line of code.

      And I guarantee that, even with the tests -- which could be Autoloaded if they became a drag on performance -- not one single module of mine takes 1/1000th of the time to load that your Reprove module takes. Not one.
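
      ("Autoloaded" here could be done with something like SelfLoader, which defers compiling anything after __DATA__ until it is first called. A minimal sketch under that assumption, with an invented module and test, saved as My/Module.pm somewhere on @INC:)

      package My::Module;
      use strict; use warnings;
      use SelfLoader;              # subs below __DATA__ compile only when first called

      sub double { 2 * $_[0] }

      1;

      __DATA__

      sub _self_test {             # not even parsed at ordinary load time
          double(2) == 4 or die 'double(2) != 4';
          print "self-test ok\n";
      }

      Loading the module stays cheap; perl -MMy::Module -e 'My::Module::_self_test()' compiles and runs the test on demand.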

      ... (just search for the name) ...

      So now you've got to invent names for all your tests. Just so you can search for that name to find the test?

      That is asinine make-work.

      If I use die, it automatically "names" the test, with the file and line number.

      In a single-line format that my editor (and just about any other programmer's editor worthy of the name) knows how to parse right out of the box.

      And if I use Carp::cluck or Carp::confess, I get full trace-back, each line of which my (and any) editor knows how to parse.

      And if I need to add extra trace to either the tests, or the code under test, in order to track down the route to the failure, I can add it temporarily, without needing to re-write half the test suite to accommodate that temporary trace.

      Or I can use Devel::Trace to track the route to the failure; or the debugger; or Devel::Peek or ...

      And if I need to pause the test at some point -- for example, so that I can attach a (C-level) debugger -- I can just stick a <STDIN> in there.

      I.e., my test methodology allows me full access to all of the debugging tools and methods available. It doesn't force-fit me into a single one-size-fits-all methodology (ack/nack), whilst stealing my input and output, and denying me all the possibilities they entail.
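
      (One way that shape can look -- a minimal sketch with an invented module and tests, guarded so the checks run only when the file is executed directly, e.g. perl My/Module.pm, and not when it is use()d:)

      package My::Module;
      use strict; use warnings;
      use Carp qw( confess );

      sub double { 2 * $_[0] }

      # ... the rest of the module ...

      unless (caller) {            # true only when run as a script, not on use()
          double( 2) ==  4 or die     'double(2) != 4';      # die names the failure with file and line
          double(-3) == -6 or confess 'double(-3) != -6';    # confess adds a full trace-back
          # <STDIN>;               # uncomment to pause, e.g. to attach a C-level debugger
          print "My::Module: all tests passed\n";
      }

      1;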

      My way is infinitely better.



        So now you've got to invent names for all your tests. Just so you can search for that name to find the test?

        That is asinine make-work.

        If I use die, it automatically "names" the test, with the file and line number.

        As I have already said, Test::Simple/Test::More, etc. give you the file and line number out of the box, without needing to name the test.

        But consider:

        die "reason"; # versus just die;

        If you ever provide an argument for die, you've just provided a name for a test. Is that "asinine make-work"?

        Besides which, if the line in question is in a loop, a file name and line number might not be enough - a name can be very useful to figure out what's gone wrong.

        { package Maths; sub factorial {
            my $n = int(pop);
            return $n if $n < 2;
            $n * factorial($n - 1) } }
        use Test::More;
        my @expected = qw/ 0 1 2 6 24 100 720 /;
        plan tests => scalar @expected;
        is(Maths::factorial($_), $expected[$_], "Factorial of $_") for 0 .. $#expected;

        A failure on line 8 doesn't give you a clue what test has failed. A failure on line 8 named "Factorial of 5" does.

        $ perl factorial.t
        1..7
        ok 1 - Factorial of 0
        ok 2 - Factorial of 1
        ok 3 - Factorial of 2
        ok 4 - Factorial of 3
        ok 5 - Factorial of 4
        not ok 6 - Factorial of 5
        #   Failed test 'Factorial of 5'
        #   at factorial.t line 8.
        #          got: '120'
        #     expected: '100'
        ok 7 - Factorial of 6
        # Looks like you failed 1 test of 7.
        

        In this case, looking at the output, it's clear where the failure is, and checking the factorial of 5 on a calculator, it's the expected result which is in fact incorrect - my ultra-useful Maths package appears to be bug-free. Though an improvement might be to die if called with a negative number.

        { package Maths;
            sub factorial {
                my $n = int(shift);
                die "does not compute" if $n < 0;
                return $n if $n < 2;
                $n * factorial($n - 1);
            }
        }
        use Test::More;
        use Test::Exception;
        my @results = qw/ 0 1 2 6 24 120 720 /;
        for (0 .. $#results) {
            lives_and { is Maths::factorial($_), $results[$_] } "Factorial of $_";
        }
        dies_ok { Maths::factorial(-2) } "Factorial of negative number";
        done_testing;

        ok 1 - Factorial of 0
        ok 2 - Factorial of 1
        ok 3 - Factorial of 2
        ok 4 - Factorial of 3
        ok 5 - Factorial of 4
        ok 6 - Factorial of 5
        ok 7 - Factorial of 6
        ok 8 - Factorial of negative number
        1..8
        
        So now you've got to invent names for all your tests. Just so you can search for that name to find the test?

        No; read the documentation, or at least the output tobyink posted.

        ... which could be Autoloaded...
        And if I use Carp::cluck or Carp::confess...
        And if I need to pause the test at some point -- for example, so that I can attach a (C-level) debugger -- I can just stick a <STDIN> in there.

        All of those things are possible with Test::Builder and friends too, without you having to edit your test files when you want to debug them. Sorry, "if" you want to debug them.
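
        For instance (a sketch; the PAUSE_TESTS environment variable is invented for illustration):

        use strict;
        use warnings;
        use Test::More tests => 1;

        note 'about to exercise the tricky code path';    # extra trace; shown under prove -v
        if ($ENV{PAUSE_TESTS}) {
            diag "pid $$ paused; press enter to continue";
            <STDIN>;                                      # so a C-level debugger can attach
        }

        ok 1, 'placeholder test';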

        It doesn't force-fit me into a single one-size-fits-all methodology...

        Repeating that ad nauseam doesn't make it true.

Re^6: Developing a module, how do you do it ?
by mascip (Pilgrim) on Mar 03, 2012 at 01:57 UTC

    Thank you, this feels very interesting!

    I kind of wish someone would join in and say why they like or don't like '.t' files and coverage tools, to make it more "spicy" and to help me understand why so many people use these.
    It could just be a fashion, as you said, but I guess that if people keep on using them, it's because they like something in them. Or just because they got used to them, and keep hearing so many people say how wonderful it all is, and how it's going to get even better.

    Like I said before, I am myself only just starting to use testing: 4 months ago I didn't even know it existed, and since then I have mostly read (very positive) comments about it in several books, without actually using it.
    I am going to start now, and am trying to choose how to start. Your thoughts are like honey to me: thanks for this precious food!

    Not that I think you are totally right: I will need to form my own opinion by experimenting... and find my own way of developing.
    But I appreciate that you challenge many things I have learned, propose alternative paths, and make me think about how I want to relate to testing:
    - what will it be for me?
    - should testing lead my development, or should my development use testing as a tool among others?
    - test coverage or no test coverage? I guess a little pinch of it can maybe help me not to leave huge untested areas behind... I will have to try it and feel what it does.
    - testing or not testing? Errrrr... not that one (^c^)
    - spending hours imagining and creating many "edge case" tests to feel safer, or creating them "on the go" when I feel I need to test something and/or debug?

    I'll get my own sensibility by trying, and by listening to people sharing theirs.

    I would like to take my message and yours, and share them in "Meditations" or "Seekers of Perl Wisdom" again, to collect more impressions, as the subject has shifted from developing in general to testing alone. And now that this message "has been posted a 'long' time ago", fewer people might see it.
    But I am also an inexperienced PerlMonk, so my ideas are naive (they don't take much account of the context, as I don't know it very well).

    What do you think?

      I kind of wish someone would join in and say why they like or don't like '.t' files and coverage tools, to make it more "spicy" and to help me understand why so many people use these.

      I wish that too.

      You may read any of many reasons into the lack of such responses here. Here are a couple of possibilities:

      • I could be such a kook, that I'm not worth arguing with.
      • I could be so good at arguing my case, no one is prepared to subject their prejudices, dogmas and half-baked reasoning to the cold, hard logic of my scrutiny.

      You'll have to arrive at your own judgement on that.

      What do you think?

      I think that you could pose a new question here, something like: "Do you use separate .t files for your tests? If so, why?". If you don't mention this thread or my handle, you might get more responses. I'd stay out of the thread at least until you invited my reaction there.

      In the end, you'd have to try it both ways and live with the packages through a few maintenance cycles -- preferably carefully logging your experiences -- to reach some kind of definitive conclusions.

      Even then they would be your conclusions, and others would interpret the results differently. I've often seen practices adopted by people because they are the done thing, or the latest and greatest fad, which then become entrenched habits that they will defend without recourse to rationality.

      Indeed, I've done it myself in the past. It took a particular project where my way of working was closely monitored and questioned in fine detail by a third party -- it was used to form the basis of a set of working practices and guidelines for a whole huge project -- to make me question some of them in detail.



        I'll speak up and say thank you for offering an alternative to the generally accepted push toward heavy use of git and testing. That 14-step process (and that's brief?) sounds like it would work great for collaborative projects, but for a one-man operation like mine, I look at it and wonder when I'd get any coding done.

        On the other hand, I would like to use more testing and version control. My programming background is of the "hack it together until it works" variety, so my steps tend to look like this:

        • 10 Edit script
        • 20 Run script
        • 30 If errors, goto 10
        • 40 Publish script, send invoice

        (If I used git on most of my projects, I have a feeling they would get checked in once, like this: `git commit -a -m 'done'`) Most of the time, that works fine. If there's an error, it's probably near the lines I was editing, so I don't need an IDE to take me to the location of the error. If I'm using Emacs and running the script in an xterm (or web browser), my editor is still where I was; if I'm using vi, it takes me back to my previous location when I open the file again. Perl's error messages are clear enough to get me to the exact spot from there.

        Some version control would be nice. In 15 years of programming, there have only been a handful of times that I wanted to go back farther than the current editing session (which my editor could 'undo'). But on those occasions, it would have been very handy, so if I could automate that, it'd be nice. I probably could with Emacs, and I know I could with a simple script that would watch my 'work' directory and check anything that changed into git/svn/cvs/rcs, but I guess I haven't wanted that enough to bother yet.
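
        That watcher needn't be fancy; a minimal sketch of the idea (assuming git is on the PATH and the directory is already a repository; the one-minute poll is arbitrary):

        #!/usr/bin/perl
        # Poll a work directory and quietly commit anything that changed.
        use strict;
        use warnings;

        my $dir = shift // '.';
        chdir $dir or die "chdir $dir: $!";

        while (1) {
            system qw( git add -A );
            # commit exits non-zero when nothing changed; ignore that
            system 'git', 'commit', '-q', '-m', 'autosave ' . localtime;
            sleep 60;
        }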

        Another appealing aspect of the 14-step process is that it provides a certain amount of a paper trail and documentation. Quick hack scripts and troubleshooting sessions generally aren't heavy on documentation, if they have any at all, so it'd be nice to have some kind of running commentary to check back on later. It would also supply a timeline of time spent on the task. But again, if I were inclined to write better documentation and keep tighter timelines, I could already do so by adding it right in the script or a separate doc file.

        So I like your idea of testing and version control as an automated background process; that would make me a better programmer without annoying me into avoiding it. I think much of that can be done with Emacs, but I haven't studied it enough to know if it can be made unobtrusive enough.

        Aaron B.
        My Woefully Neglected Blog, where I occasionally mention Perl.