PerlMonks  

Organising Large Test Suites

by eyepopslikeamosquito (Archbishop)
on Jun 06, 2004 at 22:36 UTC ( [id://361821]=perlmeditation )

Further to What is the best way to add tests to existing code?, I need to organise a huge and growing test suite. In particular, I'm eager to learn good ways to map unit test programs to external sources -- such as bug-IDs, test-case IDs, test plans, and so forth.

For example, given a specific bug-ID, could you tell me: a) if it is being tested by the regression test suite; and b) which bit of unit test code actually tests that bug. Ditto for a specific test-case-ID/test plan.

Curiously, for the Perl core, given a particular RT ticket #, there is currently no easy way to tell which of Perl's 80,000-odd tests test for it. Or even if it's regression tested at all. (If you're lucky, you might find a reference to the RT ticket # in code comments and/or change logs).

There's been a lot of interest in code coverage, and Paul Johnson has done a wonderful job with Devel::Cover. What about bug coverage? That is, what percentage of your bug database is covered by your test suite? Which, btw, might make a nice kwalitee measure of CPAN modules.

You might partition your test suite into two pieces: one to test bugs raised in the field; the other to grow in-step with the code as you develop it (i.e. test-driven development). My current plan is simply to insert bug-IDs/test-case IDs in test program comments and Configuration Management change logs. How do you do it?
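One lightweight way to make the bug-IDs-in-comments plan queryable is to pick a marker convention and extract it mechanically. A minimal sketch, assuming an invented "RT#<ticket>" convention in test names and comments (the marker format and sample test source are illustrations, not an existing standard):

```perl
use strict;
use warnings;

# Extract every "RT#<ticket>" marker from a chunk of test-file source.
# A real tool would walk t/ with File::Find and build a bug-ID -> file map.
sub bug_ids {
    my ($source) = @_;
    my %seen;
    $seen{$1} = 1 while $source =~ /\bRT#(\d+)\b/g;
    return sort { $a <=> $b } keys %seen;
}

# Sample test-file content (made up for illustration):
my $test_source = <<'END';
ok( fix_widget(), 'widget no longer leaks (RT#1234)' );
# regression for RT#99 and RT#1234
END

print join( ',', bug_ids($test_source) ), "\n";    # 99,1234
```

Run over the whole suite, the inverse of this map answers the "bug coverage" question above: any ticket in the bug database with no marker anywhere in t/ is not regression tested.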

Replies are listed 'Best First'.
Re: Organising Large Test Suites
by adrianh (Chancellor) on Jun 06, 2004 at 23:05 UTC
    You might partition your test suite into two pieces: one to test bugs raised in the field; the other to grow in-step with the code as you develop it (i.e. test-driven development).

    I try not to do this for a couple of reasons:

    • The failing bug test often illustrates something useful about holes in the existing test suite. If it's off in a separate area, it is harder to get a good handle on the unit being tested, since the tests are spread out over different locations.
    • There is a tendency to sideline the bug-report tests if they're not causing your main build to fail.

    If it's not something that can be fixed immediately I might make it a $TODO test so that the test suite runs successfully, but there's always that reminder that there is technical debt that needs to be paid off.
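In Test::More, that $TODO approach looks something like this (the ticket number and subroutine are invented for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 1;

sub broken_thing { 'actual' }    # stand-in for the code with the known bug

# Mark the known-broken behaviour as TODO: the failing test does not fail
# the build, but the reminder shows up in the TAP output on every run.
TODO: {
    local $TODO = 'RT#1234 not fixed yet';    # hypothetical ticket number
    is( broken_thing(), 'expected', 'RT#1234: broken_thing returns the right value' );
}
```

When the bug is finally fixed, the test starts passing and the harness reports it as an unexpectedly succeeding TODO, which is the cue to promote it to a normal test.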

    My current plan is simply to insert bug-IDs/test-case IDs in test program comments and Configuration Management change logs. How do you do it?

    I tend to use the SCM logs for this, and I've also been experimenting with subversion's properties (for those who don't use subversion: properties are versioned metadata you can associate with files/directories).

    IMHO it belongs in the SCM, not the tests themselves, since it's basically a comment and, as test suites and code get refactored, the comment drifts more and more out of sync with reality. You want to track when the failing test(s) for a particular bug were created and when they were fixed, which is a job that source control does well. I'm not really certain that there is much utility in tracking stuff after that.

Re: Organising Large Test Suites
by Zaxo (Archbishop) on Jun 06, 2004 at 23:31 UTC

    Test::More's ways of saying ok(), is(), isnt(), cmp_ok(), like(), unlike(), and is_deeply() include the test name as a last argument. See also the diag() function. Test::Simple's ok() does the same. Just put the RT ticket number there with a description.

    You'll get the test name in the test output, along with some diagnostics in # comments for all but ok().
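For example (ticket numbers invented for illustration):

```perl
use strict;
use warnings;
use Test::More tests => 3;

# Putting the RT ticket number in the test-name argument, as suggested
# above, so it appears in the TAP output next to each result.
is(   2 + 2,    4,       'RT#31337: addition no longer overflows'  );
like( 'abcdef', qr/cde/, 'RT#31338: substring match restored'      );
ok(   1 == 1,            'RT#31339: identity comparison holds'     );

diag('See the bug tracker for full reproduction steps.');
```

A plain grep of the suite's output (or of the .t files) for a ticket number then answers "is this bug regression tested, and where?".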

    After Compline,
    Zaxo

Re: Organising Large Test Suites
by stvn (Monsignor) on Jun 07, 2004 at 00:39 UTC

    I would have to agree with adrianh that the test suite is maybe not the place to do this. His suggestion about using the SCM comments is a good one, but I would take that a step further as well, and enter the information about the bug, how it was fixed, and where it is tested in some RT-like bug-tracking application.

    ... given a specific bug-ID, could you tell me: a) if it is being tested by the regression test suite; and b) which bit of unit test code actually tests that bug.
    All this could then be accomplished by just viewing the comments about the bug. And since what you are asking to look at is really best viewed from the POV of the bug, the bug-tracking application seems to me the logical place for it.

    As for how to accomplish this with your test-cases and test-plans, I would recommend the same approach (assuming you have an application or document that keeps track of these things, and you can add information to said document).

    Another thought might be to place specific bug-fix test code into separate files, which are then included with do or some kind of pre-processing into the larger test suite. This would allow you to still retain the larger structure of the tests, while being able to keep the bug-fix tests in their own separate files. This would allow you to reference that single file in your bug-tracking application, and put more in-depth comments in the file. Of course you still need to worry about the tendency of comments and code to fall out of sync, but since the whole file will be specific to this one bug-fix, it is possible this may not happen (at least not as easily).
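A runnable sketch of that do-based approach (the one-file-per-bug layout and the rt-*.pl naming convention are assumptions; here the per-bug file is generated in a temp directory only to keep the example self-contained):

```perl
use strict;
use warnings;
use Test::More;
use File::Temp qw(tempdir);

# Create one stand-in per-bug test file; in real use these would live
# under something like t/bugs/ in the repository.
my $dir     = tempdir( CLEANUP => 1 );
my $bugfile = "$dir/rt-1234.pl";    # hypothetical bug-ID in the filename
open my $fh, '>', $bugfile or die "cannot write $bugfile: $!";
print {$fh} <<'END';
# Everything in this file is specific to RT#1234, so its comments can go
# into depth without drifting away from the code they describe.
is( 2 + 2, 4, 'RT#1234: arithmetic fixed' );
END
close $fh;

# Pull every per-bug file into the main suite with do().
for my $file ( sort glob "$dir/rt-*.pl" ) {
    do $file;
    die "could not run $file: $@" if $@;
}

done_testing();
```

Since the do'd files run in the main package, they can call the is()/ok() functions the driver imported, and the bug-tracking application can simply reference the one file per ticket.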

    -stvn
Re: Organising Large Test Suites
by dragonchild (Archbishop) on Jun 07, 2004 at 01:58 UTC
    Personally, I would have a way of uniquely identifying each test in your test suite. Then, you can specify in the RT report which test it is. The key is the unique identifier for each test in the suite. Now, you don't have any difference between "bug tests" and "new dev tests". Frankly, in TDD, you fix bugs the way you write new code - with a test first. A bug report is a requirement. Instead of coming from the business analyst, it comes from the user. Either way, it's still a requirement.

    ------
    We are the carpenters and bricklayers of the Information Age.

    Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose

    I shouldn't have to say this, but any code, unless otherwise stated, is untested

      Personally, I would have a way of uniquely identifying each test in your test suite.
      I think it's better to assign the unique id to the bug/test-case because they tend to be more stable than the test suite (especially when you start refactoring).
        If you give each test a unique identifier, you can also track which tests handle which requirements. I know this would be useful for other reasons.

        The ID should be something that is on the test-level without reference to which file or subsystem it deals with. Maybe, an order of creation? Then, the test suite is actually just a list of the individual test IDs that should be run?
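A minimal sketch of that registry idea (the IDs, tests, and suite names are invented for illustration): every test gets a stable, creation-ordered ID, and a "suite" is nothing more than a list of IDs to run.

```perl
use strict;
use warnings;
use Test::More;

# Registry of tests keyed by a creation-ordered ID, independent of which
# file or subsystem each test belongs to.
my %registry = (
    1 => sub { is(   1 + 1,   2,       'T1: basic addition'  ) },
    2 => sub { like( 'hello', qr/ell/, 'T2: pattern match'   ) },
    3 => sub { ok(   defined 'x',      'T3: definedness'     ) },
);

# A suite is just a list of test IDs; a bug report can cite IDs directly.
my @smoke_suite = ( 1, 3 );
$registry{$_}->() for @smoke_suite;

done_testing();
```

Because the IDs never move when files are reorganised, an RT ticket that says "covered by T3" stays accurate across refactorings.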

