in reply to Unexpected behaviour with PERL5OPT

https://perldoc.perl.org/perlrun#ORDER-OF-APPLICATION covers this explicitly and currently states:

After normal processing of -M switches from the command line, all the -M switches in PERL5OPT are extracted. They are processed from left to right, i.e. the same as those on the command line.

So, the answer is no unless you explicitly insert it by running perl $PERL5OPT -MSYPH2 -e "" or similar.
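That ordering is easy to see with two throwaway modules that print when they are loaded (the module names FromCli and FromEnv are made up for this demonstration):

```shell
# Two tiny modules that announce themselves at load time.
mkdir -p demolib
printf '%s\n' 'package FromCli; print "FromCli loaded\n"; 1;' > demolib/FromCli.pm
printf '%s\n' 'package FromEnv; print "FromEnv loaded\n"; 1;' > demolib/FromEnv.pm

# Per perlrun's ORDER OF APPLICATION, command-line -M switches are
# processed before the -M switches extracted from PERL5OPT, so FromCli
# loads first even though PERL5OPT mentions FromEnv:
PERL5LIB=demolib PERL5OPT='-MFromEnv' perl -MFromCli -e ''
# FromCli loaded
# FromEnv loaded
```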


🦛

Re^2: Unexpected behaviour with PERL5OPT
by syphilis (Archbishop) on Oct 24, 2024 at 14:33 UTC
    So, the answer is no ...

    Yeah ... I was looking for a way of ensuring that, whenever perl was run, a specific module would be loaded before all other modules.
    I was hoping that placing that module at the beginning of PERL5OPT would achieve that end, but now I see that this alone is not guaranteed to work.

    Actually, I was just wanting Test/Builder.pm to load Math/Ryu.pm (if Math::Ryu was installed).
    The usual way to do that would be to have Builder.pm do eval { require Math::Ryu }, but that causes t/Legacy/dont_overwrite_die_handler.t (in the Test::Simple test suite) to fail a test if Math::Ryu was not found.
    Seems a funny thing to be causing a test to fail - but if they're testing for it, then one assumes it's important. (Shrug.)

    So the next approach was to have Builder.pm check whether "Math/Ryu.pm" was in %INC ... and that's where it became important that Math::Ryu (if available) was loaded before Test::Builder.
    Maybe that can be done simply enough ... but there are a few "what-ifs" to process when thinking that through.

    And it's just an exercise that's not really going to lead anywhere.
    Test::More can throw out some really stupid and irritating diagnostics like:
    >perl -MTest::More -le "cmp_ok(0.1 ** 2, '==', 0.01, 'T1'); done_testing();"
    not ok 1 - T1
    #   Failed test 'T1'
    #   at -e line 1.
    #          got: 0.01
    #     expected: 0.01
    1..1
    # Looks like you failed 1 test of 1.
    So I've patched Test/Builder.pm to be (eg) capable of providing:
    >perl -MMath::Ryu -MTest::More -le "cmp_ok(0.1 ** 2, '==', 0.01, 'T1'); done_testing();"
    not ok 1 - T1
    #   Failed test 'T1'
    #   at -e line 1.
    #          got: 0.010000000000000002
    #     expected: 0.01
    1..1
    # Looks like you failed 1 test of 1.
    I'll submit a PR when (if) I get the detail of involving Math::Ryu reliably sorted out.
    But this irritating Test::More behaviour crops up only rarely, and no-one cares about it, anyway.

    hippo, thanks for digging up the documentation.

    Cheers,
    Rob
      If you compare floats in a test, you should compare for "close enough", not ==. See float in Test2::V0​.

      map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
        If you compare floats in a test, you should compare for "close enough", not ==.

        If that's universally true, then Test::More should throw an exception whenever 2 NVs are compared for equivalence.
        Your assertion is not even generally true - though there are times when it may be deemed more appropriate to compare approximations.
        Anyway, I'm nearly a grown-up now, so I'll be the one who decides what and how I should and shouldn't compare.

        I've no objection that the following test fails:
        >perl -MPOSIX -MTest::More -le "cmp_ok(1.4/10, '==', 0.14, 'division'); done_testing();"
        not ok 1 - division
        #   Failed test 'division'
        #   at -e line 1.
        #          got: 0.14
        #     expected: 0.14
        1..1
        # Looks like you failed 1 test of 1.
        In fact it should fail and I expect it to fail.
        But I object to being told that the 2 values are not equivalent while being shown two identical renderings of them. WTF??
        That is never going to be helpful information.

        With my patched Builder.pm (and Math::Ryu installed), the above one-liner produces:
        >perl -MPOSIX -MTest::More -le "cmp_ok(1.4/10, '==', 0.14, 'division'); done_testing();"
        not ok 1 - division
        #   Failed test 'division'
        #   at -e line 1.
        #          got: 0.13999999999999999
        #     expected: 0.14
        1..1
        # Looks like you failed 1 test of 1.
        which actually tells us what the 2 values were.
        It's a pretty rare scenario. The only recent example I have that turned up in the wild is from https://github.com/Perl/perl5/issues/22463:
        not ok 40 - tan(1) == -tan(-1)
        #   Failed test 'tan(1) == -tan(-1)'
        #   at ext/POSIX/t/math.t line 52.
        #          got: 1.55740772465490223050697480745836
        #     expected: 1.55740772465490223050697480745836
        Usual practice seems to be that "-tan(-1)" mostly computes to exactly the same value as "tan(1)", which would mean that the test passes by tautology (as it usually does).
        It was surmised that a different process, which would allow for a small discrepancy, was being followed in this particular failure.
        Again, my objection is not that it failed, but that it's claimed that the 2 values were deemed non-equivalent yet identical.
        If the "surmising" was correct, then my patched approach would have presented:
        not ok 40 - tan(1) == -tan(-1)
        #   Failed test 'tan(1) == -tan(-1)'
        #   at ext/POSIX/t/math.t line 52.
        #          got: 1.55740772465490223050697480745836023
        #     expected: 1.55740772465490223050697480745836081
        I'll post again to this thread with a link to the Test::Simple PR once I've submitted it.

        Cheers,
        Rob