Isn't the whole point of the tests to check that the module does what it is supposed to in real situations?

To some extent, the "large image" tests that you are suggesting have nothing to do with the specifics of your module's behavior, because all of your module's logic can be tested with a 3x1 and a 1x3 image. Whether GD::Image::copyResampled works in every situation is outside of your control -- but it is to be expected that libgd and/or the GD::Image wrapper around that library have sufficient tests on copyResampled between them to ensure that it works in any situation they claim it will.

The copyResampled docs say it resizes "using a weighted average of the pixels of the source area rather than selecting one representative pixel" -- so what happens if they decide to slightly tweak the weights between one version of the library and another, because the new weights give similar but visually slightly better results on certain images? Your test would be looking for specific results that depend on factors outside of your control, so two machines with different versions of libgd might give different signatures, even though the image is still reasonably resized and resampled.

Because the underlying libgd library can be changed without changing the GD wrapper distribution, and the GD wrapper distribution can be changed without changing libgd, you cannot simply pin your module to an exact version of GD (even if that were practical), because the version of GD does not guarantee a specific version of the underlying library. So you have no way of restricting to a specific libgd version, and thus no way to guarantee that the underlying function will always deterministically give a single known output for a given input, across all versions of the library that might be on a user's machine now or in the future.

The two three-pixel images that I suggested in my github comment seem to me to be immune to such differences: if you specify the squares so they land on exact pixel boundaries (which is what I did), there should never be a need for averaging or interpolating, so as far as I can tell they would not be affected by any reasonable change to the copyResampled algorithm -- though I might be proven wrong at some point if the implementors got really "creative".
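
To make that concrete, here's a rough sketch of the 3x1 case (the 1x3 case is the same with the dimensions swapped). I'm assuming Image::Square->new($gd) accepts a GD::Image and that ->square($size, $position) takes an output size and a 0..1 position and hands back a GD::Image -- adjust to your module's real interface if I've got that wrong:

    use strict;
    use warnings;
    use Test::More;
    use GD;
    use Image::Square;

    # 3x1 source: one red, one green, one blue pixel, on exact boundaries.
    my $gd = GD::Image->new(3, 1, 1);    # truecolor
    $gd->setPixel(0, 0, $gd->colorAllocate(255,   0,   0));
    $gd->setPixel(1, 0, $gd->colorAllocate(  0, 255,   0));
    $gd->setPixel(2, 0, $gd->colorAllocate(  0,   0, 255));

    my $sq = Image::Square->new($gd);

    # Squaring a 3x1 image down to 1x1 should select exactly one source
    # pixel, so no resampling weights are ever involved.  (Assumes the
    # position argument runs 0..1, left to right.)
    for my $case ([0.0 => [255, 0, 0]], [0.5 => [0, 255, 0]], [1.0 => [0, 0, 255]]) {
        my ($pos, $want) = @$case;
        my $out = $sq->square(1, $pos);
        my @got = $out->rgb($out->getPixel(0, 0));
        is_deeply(\@got, $want, "position $pos selects the expected pixel");
    }

    done_testing();

If the position argument works the way I think it does, each of those three results is fully determined by which pixel gets selected, not by how copyResampled averages anything.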

If you are really worried about large images behaving weirdly, I can think of a few options:

  1. Just keep one or more large images on your development PC for local testing, but don't put them in the repo and don't distribute them when you release (so neither GH Actions nor cpantesters would get them). That way you can see that things work in general, while relying on the test suite of libgd/GD to provide confidence that if it works locally, it will work with reasonable (though possibly not exactly-equivalent) results on other platforms and/or other versions.
  2. Keep the two large images in your github repo, gated by a variable similar to RELEASE_TESTING -- call it CONFIDENCE_TESTING or some such -- to turn the big-image testing on and off. That env-var could be true on your PC and in GH Actions, but not for smoketesters or the average user; and using MANIFEST.SKIP, you could keep the large images out of the distro's tarball, so that it doesn't ship huge files to smoketesters or actual users. (A sketch of the gating is shown after this list.)
  3. If even GH Actions coverage isn't enough to make you confident, you could use GD::Image to generate two large-dimension images at test time -- and since your library already accepts Image::Square->new($gd), you don't even need the ability to write to a temporary file like I was originally thinking (plus it gives you coverage for new-from-GD instead of only new-from-file, which is a free bonus of doing it that way). For example, if you made a grid of 144 different-colored 120x120 squares in a 16x9 or 9x16 pattern (to get both your horizontal and vertical aspect ratios), you could then pick a few squares whose results are still deterministic: by picking the correct ->square(1080/$n,$pos), you can choose the downscaling factor $n and offset factor $pos so that each down-sampled square has a portion in its middle that should be consistently the right color. It might take some experimentation, but I think you could craft a test that verifies it works with a large image, hopefully without running into variations in the algorithm. (A sketch of this approach also follows the list.)
(I was originally leaning towards #1 or at most #2, but as I started writing the description of #3, I realized that if it were my project, that's the direction I'd go.)
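
For #2, the gating could look something like this (the test-file name and the t/big-images/ directory are placeholders I made up; CONFIDENCE_TESTING is the env-var suggested above):

    # t/90-big-images.t -- only runs when CONFIDENCE_TESTING is set
    use strict;
    use warnings;
    use Test::More;
    use Image::Square;

    plan skip_all => 'set CONFIDENCE_TESTING=1 to run the big-image tests'
        unless $ENV{CONFIDENCE_TESTING};

    # ... load the large fixtures from t/big-images/ and test them here ...
    ok(-d 't/big-images', 'big-image fixtures are present');

    done_testing();

and a single line like ^t/big-images/ in MANIFEST.SKIP would keep that directory out of the distro tarball.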
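
And for #3, here's roughly the kind of generated-image test I have in mind (same caveats about the ->square($size, $position) interface as above; the colors and the sampled squares are arbitrary choices):

    use strict;
    use warnings;
    use Test::More;
    use GD;
    use Image::Square;

    # Build a 1920x1080 (16x9) truecolor grid of 144 solid 120x120 squares,
    # each with a distinct, easily recomputed color.
    my $src = GD::Image->new(1920, 1080, 1);
    my @color;                                  # [$col][$row] => [r, g, b]
    for my $col (0 .. 15) {
        for my $row (0 .. 8) {
            my @rgb = ($col * 16, $row * 28, ($col + $row) * 10);
            $color[$col][$row] = \@rgb;
            my $idx = $src->colorAllocate(@rgb);
            $src->filledRectangle($col * 120,       $row * 120,
                                  $col * 120 + 119, $row * 120 + 119, $idx);
        }
    }

    # Downscale by n=3 (1080/3 = 360), cropping from the left edge ($pos = 0),
    # so every 120x120 source square becomes a 40x40 block in the output and
    # each block's center sits well inside a single solid-colored region --
    # a weighted average of identical pixels is still that same color.
    my $out = Image::Square->new($src)->square(1080 / 3, 0);

    for my $col (0, 4, 8) {                     # spot-check a few squares
        for my $row (0, 4, 8) {
            my @got = $out->rgb($out->getPixel($col * 40 + 20, $row * 40 + 20));
            is_deeply(\@got, $color[$col][$row],
                      "square ($col,$row) keeps its color after downscaling");
        }
    }

    done_testing();

The vertical (9x16) case would just swap the dimensions and crop from the top or bottom instead.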


edit: finished a dangling sentence, and rephrased slightly.

