kernelpanix has asked for the wisdom of the Perl Monks concerning the following question:

I'm currently trying out Devel::Cover, and so far I have been able to successfully generate the statistics detailing test coverage. What I want to do now is determine the percentage of coverage and fail the project (in Jenkins) if it doesn't meet the minimum percentage required. Any ideas on how to accomplish this?

Replies are listed 'Best First'.
Re: Devel::Cover as part of automated test
by moritz (Cardinal) on Mar 26, 2012 at 08:45 UTC
      Thanks for the comments, guys. I believe Devel::Cover::DB is the medicine I need, but I can't seem to make it work. I even tried copying and pasting the code shown in the manual, to no avail. By any chance, would you know of a URL that discusses this module in detail?
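      For what it's worth, the snippet below is a minimal sketch of the Devel::Cover::DB approach: read the database, ask it for the overall percentage, and exit non-zero (which Jenkins treats as a build failure) when it falls short. The cover_db path, the criteria list, and the calculate_summary/summary calls reflect my reading of the module's documentation, so treat them as assumptions to verify against your installed version rather than a tested recipe.

        #!/usr/bin/perl
        # Sketch only: fail the build when total coverage in cover_db is
        # below a threshold. Verify method names against your Devel::Cover.
        use strict;
        use warnings;
        use Devel::Cover::DB;

        my $min = 40;                                  # example threshold (%)
        my $db  = Devel::Cover::DB->new(db => 'cover_db');

        # Build the summary for the criteria we care about; the DB also
        # computes a "Total" row covering the whole run.
        $db->calculate_summary(map { $_ => 1 } qw(statement subroutine));

        my $total = $db->summary('Total', 'total', 'percentage');
        die "No summary found in cover_db\n" unless defined $total;

        printf "Total coverage: %.1f%% (minimum: %d%%)\n", $total, $min;
        exit($total >= $min ? 0 : 1);                  # non-zero fails Jenkins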
Re: Devel::Cover as part of automated test
by chrestomanci (Priest) on Mar 26, 2012 at 08:29 UTC
    $ perl -MDevel::Cover your_script.pl
    $ cover
    Reading database from /home/yourusername/cover_db
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    File                           stmt   bran   cond    sub    pod   time  total
    ---------------------------- ------ ------ ------ ------ ------ ------ ------
    ...DB/Schema/Result/Files.pm  100.0    n/a    n/a  100.0    n/a    0.0  100.0
    ...
    Writing HTML output to /home/yourusername/coverage.html ... done.

    It looks like the information you need is in the plain text table you get when you run cover. If you need the full pathnames of each file, you can get them by parsing cover_db/coverage.html. So you basically need to write a Perl script that parses the raw report from cover and feeds the result into Hudson (as sketched below). I doubt that would be too difficult.
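    As a concrete starting point, here is a minimal sketch of such a script. It assumes the default cover_db directory, that the "cover -report text" report type is available, and that the report ends with a "Total" row whose last column is the overall percentage; the 40% default threshold is just an example.

        #!/usr/bin/perl
        # Sketch: parse the overall percentage out of cover's text report and
        # exit non-zero so Jenkins/Hudson marks the build as failed.
        use strict;
        use warnings;

        my $threshold = shift(@ARGV) // 40;   # minimum acceptable coverage (%)

        my $report = `cover -report text cover_db`;
        die "cover failed (exit $?)\n" if $?;

        # The summary table ends with a "Total" row; its last column is the
        # overall coverage percentage.
        my ($total) = $report =~ /^Total\b.*?([\d.]+)\s*$/m;
        die "Could not find a Total row in the cover report\n"
            unless defined $total;

        print "Total coverage: $total% (minimum required: $threshold%)\n";
        exit($total >= $threshold ? 0 : 1);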

    However, there is a much more difficult and thorny problem: what criteria do you use to fail the build based on test coverage? Even if you are working on an entirely new project that has used test-driven development from the start, you cannot expect 100% coverage on every branch, as there should be asserts and other code that cannot be reached from a test suite. For that reason I suggest you only pay attention to statement and subroutine coverage, not branch or condition coverage.
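    If you do go down that route, Devel::Cover can be told to collect only those criteria in the first place via its import options, along the lines of the earlier example (the script name is a placeholder):

        $ perl -MDevel::Cover=-coverage,statement,subroutine your_script.pl
        $ cover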

    If you are retrofitting test coverage to an existing project that has few tests, then demanding high test coverage will be impossible. The best you will be able to enforce is that each time a source file in your CVS (*) is modified, tests are added that increase the coverage of that file, even if only by 0.1%. For that you would need to keep a database that stores the coverage percentage of each file in your CVS.
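    One lightweight way to keep that record is a small baseline file that the build compares against and rewrites on success. The sketch below is illustrative only: it re-parses cover's text report for per-file totals (note the report may truncate long paths), stores the baseline as JSON, and fails the build when a file's coverage drops; the file names are placeholders.

        #!/usr/bin/perl
        # Sketch: fail the build if any file's coverage is lower than the value
        # recorded on the previous successful run. Paths and parsing are
        # illustrative; adapt to however you extract per-file percentages.
        use strict;
        use warnings;
        use JSON::PP;                       # core module in recent perls

        my $baseline_file = 'coverage_baseline.json';

        # Current per-file totals: last column of each .pm row in the report.
        my %current;
        for my $line (split /\n/, `cover -report text cover_db`) {
            next unless $line =~ /^(\S+\.pm)\s.*?([\d.]+)\s*$/;
            $current{$1} = $2;
        }

        # Previous run's numbers (empty on the first run).
        my %baseline;
        if (open my $fh, '<', $baseline_file) {
            local $/;
            %baseline = %{ decode_json(<$fh>) };
        }

        # Any file that was in the baseline and has lost coverage fails the build.
        my @regressions = grep {
            exists $baseline{$_} && $current{$_} < $baseline{$_}
        } keys %current;

        if (@regressions) {
            print "Coverage dropped for: @regressions\n";
            exit 1;
        }

        # Record the new numbers for the next build.
        open my $out, '>', $baseline_file or die "Cannot write baseline: $!";
        print {$out} encode_json(\%current);
        exit 0;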

    Even this is problematic, because if a developer decides to refactor a tested but untidy 200-line method into a much more elegant 30-line version, the overall coverage percentage of the file may go down, and you don't want to fail their build and discourage that kind of tidy-up.

    You may also find that developers play the system and increase the coverage percentage by removing untested code, but when you think about it that is no bad thing: they are removing dead code, and agile methodologies recommend keeping code concise and writing the simplest code that can possibly work (i.e. that passes the tests).

    (*) For CVS, read the name of whichever version control system you are actually using.

      Thanks for the tip. It was not actually my intention to hit the 100% coverage mark, but rather to ensure that we do not neglect unit tests. I'm thinking of starting with a low figure, say 40% coverage, and making sure we don't get lazy about writing unit tests (especially when new methods and/or conditions are introduced).