Test::More fails...

by Bod (Parson)
on May 29, 2023 at 23:25 UTC

Bod has asked for the wisdom of the Perl Monks concerning the following question:

My experience of tests is very limited, so the collective experience of The Monastery is greatly appreciated.

I am using Test::More and I have two issues.

Firstly...

#!perl
use 5.006;
use strict;
use warnings;
use Test::More;

plan tests => 1;

BEGIN {
    use_ok( 'AI::Embedding' ) || print "Bail out!\n";
}

diag( "Testing AI::Embedding $AI::Embedding::VERSION, Perl $], $^X" );
This fails at plan tests => 1; despite that line being added by Module::Starter and appearing correct according to the documentation. Instead, I have to write use Test::More tests => 1;. What am I doing wrong?

Second issue...

my $comp_pass1 = $embed_pass->compare(
    '-0.6,-0.5,-0.4,-0.3,-0.2,0.0,0.2,0.3,0.4,0.5',
    '-0.6,-0.5,-0.4,-0.3,-0.2,0.0,0.2,0.3,0.4,0.5'
);
ok( $comp_pass1 == 1, "Compare got $comp_pass1" );

$embed_pass->comparator('-0.6,-0.5,-0.4,-0.3,-0.2,0.0,0.2,0.3,0.4,0.5');
my $comp_pass2 = $embed_pass->compare('-0.6,-0.5,-0.4,-0.3,-0.2,0.0,0.2,0.3,0.4,0.5');
ok( $comp_pass2 == 1, "Compare to comparator got $comp_pass2" );
This code is failing despite $comp_pass1 and $comp_pass2 being 1. Again, what am I doing wrong?

This is the output from gmake test

"C:\Strawberry\perl\bin\perl.exe" "-MExtUtils::Command::MM" "-MTest::H +arness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib\l +ib', 'blib\arch')" t/*.t t/00-load.t ....... 1/1 # Testing AI::Embedding 0.1_1, Perl 5.032001, +C:\Strawberry\perl\bin\perl.exe t/00-load.t ....... ok t/01-openai.t ..... 1/11 # Failed test 'Compare to comparator got 1' # at t/01-openai.t line 44. # Looks like you failed 1 test of 11. t/01-openai.t ..... Dubious, test returned 1 (wstat 256, 0x100) Failed 1/11 subtests t/manifest.t ...... skipped: Author tests not required for installatio +n t/pod-coverage.t .. skipped: Author tests not required for installatio +n t/pod.t ........... skipped: Author tests not required for installatio +n Test Summary Report ------------------- t/01-openai.t (Wstat: 256 Tests: 11 Failed: 1) Failed test: 11 Non-zero exit status: 1 Files=5, Tests=12, 0 wallclock secs ( 0.05 usr + 0.02 sys = 0.06 CP +U) Result: FAIL Failed 1/5 test programs. 1/12 subtests failed. gmake: *** [Makefile:859: test_dynamic] Error 255

Replies are listed 'Best First'.
Re: Test::More fails...
by hv (Prior) on May 29, 2023 at 23:46 UTC

    Here's my guess for your first issue: plan tests => 1; is a standard Perl statement that gets executed at run-time, so it has not yet been executed when you hit the BEGIN block and launch into your first test.

    You could work around this by also putting the plan statement in a BEGIN block. However, use Module (foo); is equivalent to BEGIN { require Module; Module->import(foo) }, so its action occurs at BEGIN time, just as required for this case.
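
    For example, a minimal sketch of both fixes (based on the Module::Starter boilerplate above; untested against your actual file):

    #!perl
    use strict;
    use warnings;

    # Fix 1: pass the plan to the import, so it is set at compile time,
    # before the BEGIN block below runs:
    use Test::More tests => 1;

    # Fix 2 (alternative): keep a bare `use Test::More;` above and put
    # the plan call in its own BEGIN block so it also runs at compile time:
    # BEGIN { plan tests => 1; }

    BEGIN {
        use_ok('AI::Embedding') || print "Bail out!\n";
    }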

    I don't have an immediate guess for the second issue. If I were debugging this, my next steps would be a) to run the test program in isolation for more verbose output; then b) to use Devel::Peek to Dump($comp_pass2) directly after setting it - I can think of several types of value that would stringify as "1" but give a false result from $comp_pass2 == 1.
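
    For example, a sketch of step b) (reusing your $embed_pass object and test vector):

    use Devel::Peek;

    my $comp_pass2 = $embed_pass->compare('-0.6,-0.5,-0.4,-0.3,-0.2,0.0,0.2,0.3,0.4,0.5');
    Dump($comp_pass2);   # shows the scalar's flags and IV/NV/PV slots; an NV
                         # like 0.9999999999999998 stringifies as "1" but fails
                         # a numeric == 1 comparison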

    However your expectation of the function also seems odd: normally I would expect a comparator to return similar values to $a cmp $b or $a <=> $b, in particular returning 0 when the two values are equal. With your lengthy test strings it is hard to tell by eye, but I think they are the same.

      Here's my guess for your first issue.....it gets executed at run-time - so it has not yet been executed when you hit the BEGIN block and launch into your first test

      That seems sensible. It also seems odd because I didn't write that code - it is part of the boilerplate produced by Module::Starter

      I don't have an immediate guess for the second issue

      At least I am not missing something obvious :)

      $comp_pass2 should contain a floating-point number between -1 and +1. If the two stringified vectors are the same (as they are in the test), the result should be +1. The compare method returns the cosine similarity of the two vectors derived from the test strings.
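
      For reference, a minimal sketch of that calculation (not AI::Embedding's actual implementation):

      # Cosine similarity of two equal-length vectors passed as array refs.
      # Identical vectors give exactly 1 in theory, but floating-point
      # rounding can produce e.g. 0.9999999999999998, which stringifies
      # as "1" yet fails a numeric == 1 test.
      sub cosine_similarity {
          my ($x, $y) = @_;
          my ($dot, $nx, $ny) = (0, 0, 0);
          for my $i (0 .. $#$x) {
              $dot += $x->[$i] * $y->[$i];
              $nx  += $x->[$i] ** 2;
              $ny  += $y->[$i] ** 2;
          }
          return $dot / (sqrt($nx) * sqrt($ny));
      }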

Re: Test::More fails...
by choroba (Cardinal) on May 30, 2023 at 08:20 UTC
    If you want to compare that two values are the same, use is, not ok; you'll get nicer diagnostics.
    is $comp_pass1, 1, 'compare 1';

    Regarding use_ok, I prefer not to use it at all, or test it in a test of its own.

    Also note that you can specify

    done_testing(1);
    instead of planning.
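
    For example, a sketch of both suggestions together (the compare() results are stand-in values here):

    use strict;
    use warnings;
    use Test::More;   # no plan declared up front

    my ($comp_pass1, $comp_pass2) = (1, 1);   # stand-ins for compare()

    is $comp_pass1, 1, 'compare 1';
    is $comp_pass2, 1, 'compare to comparator';

    done_testing(2);  # declare the final count here instead of planning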


      Thank you.

      I already have use_ok in a test of its own so I don't need it here and will remove it.

      Off to look at is now :)

      UPDATE: Thanks choroba - it now works :)

      I changed:
      • ok( $comp_pass1 == 1, 'compare 1' ); to is $comp_pass1, 1, 'compare 1';
      • use_ok to a simple use AI::Embedding;
      • the plan to done_testing(10);
      and all(!) tests now pass :)

Re: Test::More fails...
by kcott (Archbishop) on May 30, 2023 at 09:57 UTC

    G'day Bod,

    Excellent analysis by ++hv. Here are some additional tips.

    Here's a simple test script (pm_11152486_test_more.t):

    #!perl

    use strict;
    use warnings;

    use Test::More tests => 3;

    my ($var1, $var2) = (1, 2);

    # Test1
    ok($var1 == $var2, 'Fail: ok()');

    # Test2
    is($var1, $var2, 'Fail: is()');

    # Test3
    ok($var2 == $var1, 'Fail: ok()')
        or diag "\$var1[$var1] \$var2[$var2]";
    • Test1: ok() just tells you whether the condition was true or false. The condition can include a wide range of expressions.
    • Test2: is() tells you whether the condition was true or false, and the values of the arguments, but the condition is limited to an eq comparison (isnt() uses a ne comparison).
    • Test3: Get the best of both worlds: use ok() for any conditional expression; and diag() to get the value of the arguments.

    In simple cases, you can interpolate the values of the arguments into the test name; e.g.

    ok($var1 == $var2, "Fail: ok($var1 == $var2)");

    in which case, diag() is unnecessary.

    In more complex cases, doing this is not easy, or can result in cumbersome test names; or you may want additional information such as values from %ENV. This is where diag() can be particularly useful.
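
    For example, a sketch with hypothetical names and values:

    use Test::More tests => 1;

    # Made-up values standing in for a comparison whose details would
    # make an interpolated test name unwieldy.
    my ($got, $expected) = ('blue', 'green');
    ok($got eq $expected, 'colour setting matches')
        or diag "got[$got] expected[$expected] PATH[$ENV{PATH}]";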

    Here's a straightforward run of that code:

    $ perl pm_11152486_test_more.t
    1..3
    not ok 1 - Fail: ok()
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 11.
    not ok 2 - Fail: is()
    #   Failed test 'Fail: is()'
    #   at pm_11152486_test_more.t line 14.
    #          got: '1'
    #     expected: '2'
    not ok 3 - Fail: ok()
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 17.
    # $var1[1] $var2[2]
    # Looks like you failed 3 tests of 3.

    Consider the prove utility for additional output information:

    $ prove pm_11152486_test_more.t
    pm_11152486_test_more.t .. 1/3
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 11.
    #   Failed test 'Fail: is()'
    #   at pm_11152486_test_more.t line 14.
    #          got: '1'
    #     expected: '2'
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 17.
    # $var1[1] $var2[2]
    # Looks like you failed 3 tests of 3.
    pm_11152486_test_more.t .. Dubious, test returned 3 (wstat 768, 0x300)
    Failed 3/3 subtests

    Test Summary Report
    -------------------
    pm_11152486_test_more.t (Wstat: 768 (exited 3) Tests: 3 Failed: 3)
      Failed tests:  1-3
      Non-zero exit status: 3
    Files=1, Tests=3,  0 wallclock secs ( 0.05 usr  0.02 sys +  0.03 cusr  0.09 csys =  0.19 CPU)
    Result: FAIL

    Use the -v option for verbose output:

    $ prove -v pm_11152486_test_more.t
    pm_11152486_test_more.t ..
    1..3
    not ok 1 - Fail: ok()
    not ok 2 - Fail: is()
    not ok 3 - Fail: ok()
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 11.
    #   Failed test 'Fail: is()'
    #   at pm_11152486_test_more.t line 14.
    #          got: '1'
    #     expected: '2'
    #   Failed test 'Fail: ok()'
    #   at pm_11152486_test_more.t line 17.
    # $var1[1] $var2[2]
    # Looks like you failed 3 tests of 3.
    Dubious, test returned 3 (wstat 768, 0x300)
    Failed 3/3 subtests

    Test Summary Report
    -------------------
    pm_11152486_test_more.t (Wstat: 768 (exited 3) Tests: 3 Failed: 3)
      Failed tests:  1-3
      Non-zero exit status: 3
    Files=1, Tests=3,  0 wallclock secs ( 0.00 usr  0.09 sys +  0.03 cusr  0.08 csys =  0.20 CPU)
    Result: FAIL

    In situations where make test (or other variants of make) has shown some test scripts to have failed, rerun the failed tests individually with as much output as possible. From the same directory in which you ran make test, run

    prove -vb t/failed_test.t

    In your example, 00-load.t was OK but 01-openai.t was not. So run:

    prove -vb t/01-openai.t

    You can run prove with multiple files; I generally don't — normally the output from one file is enough to deal with at once. :-)

    — Ken
