in reply to Testing: shades of grey

Here's another approach:

Repeating each test according to its importance (its weight) creates a metric in the final test count:

use Test::More;

# run the test as a subtest and repeat its result $weight times
sub weight (&$$) {
    my ($c_test, $name, $weight) = @_;
    my $res = subtest $name, $c_test;
    ok( $res, $name ) for 2 .. $weight;
}

weight { is "a", "a" } "PASSING" => 3;
weight { is "a", "X" } "FAILING" => 2;

done_testing;

# Subtest: PASSING
    ok 1
    1..1
ok 1 - PASSING
ok 2 - PASSING
ok 3 - PASSING
# Subtest: FAILING
    not ok 1
    #   Failed test at /home/lanx/perl/pm/test_metric.pl line 16.
    #          got: 'a'
    #     expected: 'X'
    1..1
    # Looks like you failed 1 test of 1.
not ok 4 - FAILING
#   Failed test 'FAILING'
#   at /home/lanx/perl/pm/test_metric.pl line 6.
not ok 5 - FAILING
#   Failed test 'FAILING'
#   at /home/lanx/perl/pm/test_metric.pl line 8.
1..5
# Looks like you failed 2 tests of 5.     <--- Metric

As you can see, a bit of functional programming gives you quite some flexibility.

Another approach might be to use Test::Builder to create your own test semantics.

Update
For instance, you can use $Test->no_diag(1); to make the repeated fails less verbose.
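
For illustration, a minimal sketch of how that could be wired into the weight() helper above (fetching the singleton builder and toggling diagnostics only around the repeated oks is my reading of the intent, not tested code):

use Test::More;

my $Test = Test::Builder->new;     # the singleton builder behind Test::More

sub weight (&$$) {
    my ($c_test, $name, $weight) = @_;
    my $res = subtest $name, $c_test;

    # suppress diagnostics while repeating the (possibly failing) result
    $Test->no_diag(1);
    ok( $res, $name ) for 2 .. $weight;
    $Test->no_diag(0);
}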

Update

Or you can use my @tests = $Test->details; to get a detailed overview of which tests passed and whether they were TODOs.

# test_metric.pl:33: [
#   { actual_ok => 1, name => "PASSING", ok => 1, reason => "", type => "" },
#   { actual_ok => 1, name => "PASSING", ok => 1, reason => "", type => "todo" },
#   { actual_ok => 1, name => "PASSING", ok => 1, reason => "", type => "todo" },
#   { actual_ok => 0, name => "FAILING", ok => 0, reason => "", type => "" },
#   { actual_ok => 0, name => "FAILING", ok => 1, reason => "", type => "todo" },
# ]

After putting the weight into the name, you can easily calculate your desired metric.
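
A hypothetical sketch of that calculation (the "[w=N]" naming convention and the percentage output are my own assumptions):

# at the end of the test script, after all weighted tests have run
my @tests = Test::Builder->new->details;

my ($scored, $total) = (0, 0);
for my $t (@tests) {
    # pull the weight out of the test name, default to 1
    my $w    = $t->{name} =~ /\[w=(\d+)\]/ ? $1 : 1;
    $total  += $w;
    $scored += $w if $t->{ok};
}

note sprintf("weighted metric: %d/%d (%.0f%%)", $scored, $total, 100 * $scored / $total) if $total;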

NB: ->details also works locally, i.e. inside a subtest it covers that subtest only.

Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery

Re^2: Testing: shades of grey
by LanX (Saint) on Dec 24, 2024 at 16:36 UTC
    Finally, you can also use TAP::Parser to scan the output.

    So if I were you, I'd use the demonstrated abstraction in combination with Test::Builder to write your own appropriate test semantics and put the "weight" into the comments.

    I would then parse the verbose output with TAP::Parser and calculate the desired "metric" from the weights.
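
    A minimal sketch of how that parsing pass could look (the "# weight: N" comment convention, emitted right before each test line, and the file name are my assumptions):

        use TAP::Parser;

        # parse the saved verbose test output
        my $parser = TAP::Parser->new({ source => 'test_metric.tap' });

        my ($scored, $total, $weight) = (0, 0, 1);
        while ( my $result = $parser->next ) {
            # pick up a weight annotation emitted as a TAP comment, e.g. "# weight: 3"
            if ( $result->is_comment and $result->raw =~ /weight:\s*(\d+)/ ) {
                $weight = $1;
                next;
            }
            if ( $result->is_test ) {
                $total  += $weight;
                $scored += $weight if $result->is_ok;
                $weight  = 1;                  # reset until the next annotation
            }
        }
        printf "weighted metric: %d/%d\n", $scored, $total;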

    I wouldn't mess with the classic test result output, because it's a different "all or nothing" metric, separate from the "better" metric.

    So it's better to put non-crucial tests into TODO blocks, because they are not kill criteria.
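
    For reference, a non-crucial check in a TODO block could look like this (the test itself is made up):

        use Test::More tests => 1;

        TODO: {
            local $TODO = "nice to have (weight 2), not a kill criterion";

            # reported as TODO, so a failure here never fails the suite
            is "almost right", "exactly right", "output matches the ideal form";
        }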

    Regarding testing things with various levels of "ok-ness", I'd write a test which tries them from best to worst and stops as soon as one passes, i.e. it reports OK and logs the weight in the comments.

    Like if-elsif-... chains, or, in the case of simple strings, hash look-ups with the weights as values.

    A routine like

    isinchain( got, [ [exp1, weight1, name1], ... ], name ) could do the abstraction.
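
    A hypothetical implementation sketch of such a routine (the name, the argument layout and the weight comment are all assumptions, not an existing module):

        use Test::More;

        # isinchain($got, \@candidates, $name)
        # @candidates holds [$expected, $weight, $candidate_name] triples,
        # ordered from best match down to worst still-acceptable match.
        sub isinchain {
            my ($got, $candidates, $name) = @_;

            for my $c (@$candidates) {
                my ($exp, $weight, $cname) = @$c;
                if ($got eq $exp) {
                    # first (i.e. best) match wins: report OK and log the weight
                    note "weight: $weight ($cname)";
                    return pass($name);
                }
            }
            return fail($name);    # nothing in the chain matched
        }

        isinchain( "b", [ [ "a", 3, "ideal" ], [ "b", 2, "acceptable" ] ], "shades of grey" );
        done_testing;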

    That's all very abstract, because your description was very abstract too.

    Hope this helps. :)

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    see Wikisyntax for the Monastery