in reply to Testing image output

Should I manually verify the image it is supposed to create, then take a hash of that image and compare the hash from the same process during testing?

Sounds good to me.

If I do it this way, I will need to use a hashing module and I'm reluctant to create a dependency that is only used in the tests.

Digest::MD5 is a core module and has been since 5.7.3. You should have no qualms about relying on that one being present. Still declare it as a test dependency just in case for the 5.6.0 hold-outs.
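For instance, hashing a string of bytes with core Digest::MD5 is essentially a one-liner (a minimal sketch; in the real test the argument would be the raw image bytes, read with binmode):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# md5_hex returns a 32-character lowercase hex digest of its input bytes.
my $digest = md5_hex('hello');
print "$digest\n";    # 5d41402abc4b2a76b9719d911017c592
```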


🦛

Re^2: Testing image output
by Bod (Parson) on Sep 07, 2023 at 23:17 UTC

    Thanks hippo

    I've used md5_hex from Digest::MD5 and checked it is installed as I've required Perl 5.010

    Could I please have some feedback on this test file as testing is not my strongest skill...

    #!perl
    use 5.010;
    use strict;
    use warnings;

    use Test::More;

    eval "use Digest::MD5 qw(md5_hex)";
    plan skip_all => "Skipping tests: Digest::MD5 qw(md5_hex) not available!" if $@;
    plan tests => 11;

    use Image::Square;

    # Hashes of visually tested images
    my %hash = (
        'hor_square' => '370f26d5fbc52ae93b1bd0928e38cd24',
        'hor_left'   => '807f6263746b7646f172b7b0928d9195',
        'hor_right'  => '65672407acff93b188d24bc9b0003bd7',
        'ver_square' => 'c9958ec55ef446c41fdecb71b2c69d09',
        'ver_top'    => '024a3f36e32f1ad193e761b315e7be4c',
        'ver_bottom' => 'aea5e4120e8459b226ea003a0d57a2b4',
    );

    # Test horizontal image
    my $image = Image::Square->new('t/CoventryCathedral.jpg');
    ok ($image, 'Instantiation');

    diag('Testing horizontal image');
    my $square1 = $image->square();
    my $square2 = $image->square(100);
    my $square3 = $image->square(150, 0);
    my $square4 = $image->square(150, 1);
    ok ($square1->width == $square1->height, 'Image is square from horizontal');
    ok (100 == $square2->width && 100 == $square2->height, 'Correct resize from horizontal');
    ok (md5_hex($square2->jpeg(50)) eq $hash{'hor_square'}, 'Correct centre image from horizontal');
    ok (md5_hex($square3->jpeg(50)) eq $hash{'hor_left'}, 'Correct left image from horizontal');
    ok (md5_hex($square4->jpeg(50)) eq $hash{'hor_right'}, 'Correct right image from horizontal');

    # Test vertical image
    $image = Image::Square->new('t/decoration.jpg');
    diag('Testing vertical image');
    my $square5 = $image->square();
    my $square6 = $image->square(100);
    my $square7 = $image->square(150, 0);
    my $square8 = $image->square(150, 1);
    ok ($square5->width == $square5->height, 'Image is square from vertical');
    ok ($square6->width == 100 && $square6->height == 100, 'Correct resize from vertical');
    ok (md5_hex($square6->jpeg(50)) eq $hash{'ver_square'}, 'Correct centre image from vertical');
    ok (md5_hex($square7->jpeg(50)) eq $hash{'ver_top'}, 'Correct top image from vertical');
    ok (md5_hex($square8->jpeg(50)) eq $hash{'ver_bottom'}, 'Correct bottom image from vertical');

    done_testing;

      Could I please have some feedback on this test file as testing is not my strongest skill

      I pulled a face the instant I saw all those ok functions! From Basic Testing Tutorial by hippo:

      While the ok function is useful, the output is a simple pass/fail - it doesn't say how it failed ... Let's use Test::More and its handy cmp_ok function

      To convince yourself this is a worthwhile change, try running some failing test cases with your original ok and compare with cmp_ok.
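As a sketch of the difference (the digests here are computed on stand-in strings, not real image bytes): ok only reports pass/fail, while cmp_ok also echoes both operands on failure:

```perl
use strict;
use warnings;
use Test::More tests => 2;
use Digest::MD5 qw(md5_hex);

my $got      = md5_hex('image bytes');   # stand-in for md5_hex($square->jpeg(50))
my $expected = md5_hex('image bytes');

# Pass/fail only:
ok( $got eq $expected, 'digest matches (ok)' );

# On failure this would additionally print "got: ..." / "expected: ...":
cmp_ok( $got, 'eq', $expected, 'digest matches (cmp_ok)' );
```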

        Thanks for the bedtime reading...that might improve my abilities with Test::More 👍

        To convince yourself this is a worthwhile change, try running some failing test cases with your original ok and compare with cmp_ok.

        I would take up your suggestion eyepopslikeamosquito, but the tests don't fail for me...I would need to upload another dev release to CPAN and wait for some CPAN Testers to generate results for me. There seem to be precious few testers these days, as test results are taking longer and longer to arrive. I don't want to unnecessarily burden those who are doing this sterling work!

        To convince yourself this is a worthwhile change, try running some failing test cases with your original ok and compare with cmp_ok

        I took this advice...
        As expected, I get a more verbose output.

        But, I don't see how it is any more helpful:

        # Testing horizontal image
        #   Failed test 'Correct centre image from horizontal'
        #   at t/02-image.t line 41.
        #          got: '33220702a62b38deb37ef0ce0b4a1a22'
        #     expected: 'c97e63fc792ef75b5ff49c078046321e'
        #   Failed test 'Correct left image from horizontal'
        #   at t/02-image.t line 43.
        #          got: '4a0e69414c9603b0437d22841ef0d300'
        #     expected: '20a5c6517316ebef4c255c12f991dbc7'

      It won't work as written. Neither coders nor decoders are stable.

      Does your module (I can't find it on CPAN) inherit from GD (judging by the width/height/jpeg methods)? It doesn't matter in the end; I only hope you force it to treat JPEGs as truecolor on open, because GD doesn't, despite whatever its doco says. If GD converts a JPEG to a palette image, that only adds more mess to the description below -- and it looks like GD tunes that algorithm (truecolor-to-palette quantization) more frequently, so there's no need to go as far back as 5.010 to demonstrate.

      Frog is frog

      use strict;
      use warnings;
      use feature 'say';
      use GD;
      use Digest::MD5 'md5_hex';

      say $^V;
      say $GD::VERSION;

      my $f = 'frog.jpg';
      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( $f );
      say "Image is ", ( $i-> isTrueColor ? '' : 'not ' ), 'truecolor';
      printf "RGB triplet for 0,0 pixel is: %3\$d, %2\$d, %1\$d\n",
          unpack 'C3', pack 'L', $i-> getPixel( 0, 0 );

      __END__
      v5.32.1
      2.76
      Image is truecolor
      RGB triplet for 0,0 pixel is: 0, 248, 231

      v5.10.1
      2.44
      Image is truecolor
      RGB triplet for 0,0 pixel is: 0, 247, 231

      Blame ancient GD version? But

      >convert frog.jpg -format "%[pixel:u.p{0,0}]\n" info:
      srgb(0,247,231)

      I'd say lossy codecs are murky waters and avoid them in tests:

      >convert frog.jpg frog.png

      use strict;
      use warnings;
      use feature 'say';
      use GD;
      use Digest::MD5 'md5_hex';

      say $^V;
      say $GD::VERSION;

      my $f = 'frog.png';
      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( $f );
      say "Image is ", ( $i-> isTrueColor ? '' : 'not ' ), 'truecolor';
      say md5_hex( $i-> png() );

      __END__
      v5.32.1
      2.76
      Image is truecolor
      1b6edecaa6d0b67f7bf960113f2136c7

      v5.16.3
      2.49
      Image is truecolor
      9edf3f9e11991f2dcfd77a7e1ffbafe8

      Eh? What's the matter now? My first thought was that zlib tunes its compression algorithm between versions (despite the same "level" 0..9), but that's not the reason for the difference above -- though I strongly suspect it can influence the result for input other than this puny frog. Here it's just a pHYs chunk that libpng decides to include from some version on.
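One way around that kind of chunk noise (purely a sketch, not something settled in this thread): hash only a PNG's critical chunks, whose type codes start with an upper-case letter per the PNG spec (IHDR, PLTE, IDAT, IEND), skipping ancillary ones such as pHYs:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Digest a PNG's critical chunks only, ignoring ancillary chunks
# (pHYs, tIME, tEXt, ...) that libpng versions add or drop.
# PNG layout: 8-byte signature, then chunks of
# 4-byte length, 4-byte type, data, 4-byte CRC.
sub png_digest_critical {
    my ( $png ) = @_;
    my $pos  = 8;           # skip the 8-byte PNG signature
    my $keep = '';
    while ( $pos + 8 <= length $png ) {
        my ( $len, $type ) = unpack 'N a4', substr $png, $pos, 8;
        # Critical chunk types start with an upper-case letter.
        $keep .= $type . substr $png, $pos + 8, $len
            if $type =~ /^[A-Z]/;
        $pos += 12 + $len;  # length + type + data + CRC
    }
    return md5_hex( $keep );
}
```

Note this still would not help when the compressed IDAT payload itself differs between zlib builds, which is why falling back to an uncompressed representation is the more robust move.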

      Then, what it all amounts to -- try a stable (read: "obsolete") lossless coder with uncompressed output (GD can't dump raw pixels, unfortunately), short of enumerating pixels one by one and appending their RGB values to a string. Hopefully it's OK now:

      use strict;
      use warnings;
      use feature 'say';
      use GD;
      use Digest::MD5 'md5_hex';

      say $^V;
      say $GD::VERSION;

      my $f = 'frog.png';
      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( $f );
      say "Image is ", ( $i-> isTrueColor ? '' : 'not ' ), 'truecolor';
      say md5_hex( $i-> gd() );

      __END__
      v5.32.1
      2.76
      Image is truecolor
      e60c6afd7eefe80050d6af4488457281

      v5.16.3
      2.49
      Image is truecolor
      e60c6afd7eefe80050d6af4488457281

      v5.10.1
      2.44
      Image is truecolor
      e60c6afd7eefe80050d6af4488457281
        Neither coders nor decoders are stable

        Do you mean the hashing coder/decoder or the image format?

        Does your module (can't find it on CPAN) inherit from GD (judging by width/height/jpeg methods)?

        Yes, it does use GD. It doesn't inherit from it.

        You can't find it on CPAN because it is still a development release. Although the module works under all visual testing, I wanted to get it working with a complete test suite before releasing it publicly.

        Yes, the module does use GD::Image-> trueColor( 1 ); and the tests specify $new->jpeg(50) rather than leaving GD to choose the best compression (whatever that means) as the docs say it does. I stayed away from PNG as I know that the exact behaviour of that depends on how GD was built, something that will vary on target machines.

        I hadn't thought of using a GD object to perform the comparison...obvious really...thanks...

Re^2: Testing image output
by Bod (Parson) on Sep 08, 2023 at 09:49 UTC
    Sounds good to me

    It might sound like a good approach...but...it's failing under testing 😕

    But I'm not sure what could be producing this failure other than different builds of GD producing slightly different output, or the hashing behaving subtly differently on Linux (where it is failing) compared with Windows (where I am developing). I specified the image quality with $new->jpeg(50) to try to keep GD consistent across builds.

      So, it's JPEG. :-)

      I agree with our Anonymous friend who wrote:

      no JPGs in t folder, because same image file can't be expected to decode to same data.

      Maybe use a lossless format instead for this level of testing and then separately just confirm that using JPEGs doesn't error out? Or else see how other JPEG modules handle it in their test suites.
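The "doesn't error out" half might look something like this (a sketch with a stand-in encoder so it runs without GD; in the real suite the call would be the module's own JPEG output, e.g. something like $square->jpeg(50)):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical stand-in for the module's JPEG encoder, so this
# sketch is self-contained.
sub encode_jpeg { return "\xFF\xD8" . ( 'x' x 16 ) . "\xFF\xD9" }

# 1. The lossy path only has to succeed, not match a byte-exact hash...
my $bytes = eval { encode_jpeg() };
ok( defined $bytes, 'jpeg encoding does not die' );

# 2. ...and produce something shaped like a JPEG stream:
#    SOI marker FF D8 at the start, EOI marker FF D9 at the end.
like( $bytes, qr/\A\xFF\xD8.*\xFF\xD9\z/s, 'output has JPEG markers' );
```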


      🦛

      omg, ain't GD so very difficult. I'm looking at the Image-Square 0.01_4 testers matrix; what was supposed to be a walk in the park looks like a blood-covered battlefield.

      Half the failures are from GD's native output format being unsupported -- who could have expected that. I'm sorry. It isn't really a problem, because ".gd" is just an 11-byte header plus raw data:

      use strict;
      use warnings;
      use feature 'say';
      use GD;
      use Digest::MD5 'md5_hex';

      say $^V;
      say $GD::VERSION;
      say eval { GD::VERSION_STRING() } || '-';

      GD::Image-> trueColor( 1 );
      my $fn = 'CoventryCathedral.png';
      my $i = GD::Image-> new( $fn );

      use constant W => 100;
      my $j = GD::Image-> new( W, W );
      $j-> copyResampled( $i, 0, 0,
          ( $i-> width - $i-> height ) * .5, 0,
          W, W, $i-> height, $i-> height );

      say eval { md5_hex( $j-> gd )} || '-';
      say md5_hex( my_gd( $j ));

      sub my_gd {
          # same as gd() for truecolor images
          my $gd = shift;
          my ( $w, $h ) = $gd-> getBounds;
          my $s = '';
          for my $y ( 0 .. $h - 1 ) {
              for my $x ( 0 .. $w - 1 ) {
                  $s .= pack 'L>', $gd-> getPixel( $x, $y );
              }
          }
          return "\xff\xfe"
              . ( pack 'S>2', $w, $h )
              . "\1\xff\xff\xff\xff"
              . $s
      }

      __END__
      v5.38.0
      2.78
      2.3.2
      -
      c97e63fc792ef75b5ff49c078046321e

      v5.32.1
      2.76
      2.2.5
      c97e63fc792ef75b5ff49c078046321e
      c97e63fc792ef75b5ff49c078046321e

      v5.24.3
      2.66
      2.1.1
      adc191aea66fdf99fd74aaeb20b34e5e
      adc191aea66fdf99fd74aaeb20b34e5e

      Note: the first checksum is exactly what "t/02-image.t line 41" was expecting, while the second is what many (but not all) failures "got".

      It appears that copyResampled (and interpolation in general, see further) is unstable between versions and plagued with bugs. So even generating a synthetic gradient (or whatever) and checking just a couple of pixels (e.g. the lower-left and upper-right points) is NOT a reliable way to test anything with GD, let alone calculating a checksum over a whole re-sampled image.
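Given that kind of off-by-one channel drift (fe vs ff), one defensive alternative -- purely a sketch, not anything this thread settled on -- is to compare pixels with a small tolerance instead of hashing. The nested arrays below stand in for looping over getPixel calls:

```perl
use strict;
use warnings;

# True if every channel of every pixel differs by at most $tol.
# Pixels are 0xRRGGBB integers in row-major nested arrays -- a
# stand-in for iterating $img->getPixel( $x, $y ).
sub images_close_enough {
    my ( $got, $want, $tol ) = @_;
    return 0 unless @$got == @$want;
    for my $y ( 0 .. $#$got ) {
        for my $x ( 0 .. $#{ $got->[$y] } ) {
            # pack 'N' gives 00 RR GG BB; drop the first byte, split channels.
            my @g = unpack 'C3', substr pack( 'N', $got->[$y][$x] ),  1;
            my @w = unpack 'C3', substr pack( 'N', $want->[$y][$x] ), 1;
            for my $c ( 0 .. 2 ) {
                return 0 if abs( $g[$c] - $w[$c] ) > $tol;
            }
        }
    }
    return 1;
}

# 0xFE0000 vs 0xFF0000 is exactly the drift seen above:
my $got  = [ [ 0xFE0000, 0xFF0000 ] ];
my $want = [ [ 0xFF0000, 0xFF0000 ] ];
print images_close_enough( $got, $want, 2 ) ? "close enough\n" : "differ\n";
```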

      No CoventryCathedral for tests below, simply a red 8 by 8 square to reduce to smaller squares:

      use strict;
      use warnings;
      use feature 'say';
      use GD;

      say $^V;
      say $GD::VERSION;
      say eval { GD::VERSION_STRING() } || '-';

      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( 8, 8 );
      $i-> filledRectangle( 0, 0, 7, 7, $i-> colorAllocate( 255, 0, 0 ));

      for my $w ( 1 .. 7 ) {
          my $j = GD::Image-> new( $w, $w );
          $j-> copyResampled( $i, 0, 0, 0, 0, $w, $w, 8, 8 );
          print "\t\t\t$w\n";
          for my $y ( 0 .. $w - 1 ) {
              for my $x ( 0 .. $w - 1 ) {
                  my ( $r ) = $j-> rgb( $j-> getPixel( $x, $y ));
                  printf '%x ', $r;
              }
              print "\n";
          }
      }

      __END__
      v5.38.0
      2.78
      2.3.2
      1
      ff
      2
      ff ff ff ff
      3
      fe fe fe fe fe fe ff fe ff
      4
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      5
      fe ff ff fe ff ff fe ff ff ff fe fe ff ff ff fe fe ff ff fe ff ff ff fe ff
      6
      fe ff ff ff ff ff ff ff ff ff ff fe ff ff ff fe ff fe ff ff ff ff ff ff ff fe ff ff ff fe ff fe fe ff fe ff
      7
      fe ff ff fe fe ff fe ff ff ff ff ff ff fe fe ff ff ff ff ff ff ff ff ff ff ff ff ff fe ff ff ff ff ff ff ff ff ff ff ff ff ff ff fe ff ff ff ff ff

      v5.24.3
      2.66
      2.1.1
      1
      ff
      2
      ff ff ff ff
      3
      ff ff ff ff ff ff ff ff ff
      4
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      5
      fe fe fe fe fe fe fe ff fe fe fe fe ff fe ff ff fe fe ff fe fe fe ff fe ff
      6
      ff ff ff ff ff ff ff fe ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff fe ff ff ff ff ff ff ff ff ff ff
      7
      fe fe ff fe ff fe fe fe ff fe ff ff fe ff ff fe ff ff ff fe fe fe ff ff ff fe ff ff ff ff ff fe fe ff fe fe ff ff ff ff ff ff fe ff ff ff fe ff fe

      Oh, I thought -- but I'm copying red pixels to another (smaller) canvas, filled with default black. Maybe, instead, a plain simple resize would preserve the pure red colour? Note that plain "resize" was not implemented in old versions anyway.

      use strict;
      use warnings;
      use feature 'say';
      use GD;

      say $^V;
      say $GD::VERSION;
      say eval { GD::VERSION_STRING() } || '-';

      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( 8, 8 );
      $i-> filledRectangle( 0, 0, 7, 7, $i-> colorAllocate( 255, 0, 0 ));

      for my $w ( 1 .. 7 ) {
          my $j = $i-> copyScaleInterpolated( $w, $w );
          print "\t\t\t$w\n";
          for my $y ( 0 .. $w - 1 ) {
              for my $x ( 0 .. $w - 1 ) {
                  my ( $r ) = $j-> rgb( $j-> getPixel( $x, $y ));
                  printf '%x ', $r;
              }
              print "\n";
          }
      }

      __END__
      v5.38.0
      2.78
      2.3.2
      1
      ff
      2
      ff ff ff ff
      3
      ff ff ff ff fd fd ff fd fd
      4
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      5
      ff ff ff ff ff ff fd fd fd fd ff fd fd fd fd ff fd fd fd fd ff fd fd fd fd
      6
      ff ff ff ff ff ff ff fd fd fd fd fd ff fd fd fd fd fd ff fd fd fd fd fd ff fd fd fd fd fd ff fd fd fd fd fd
      7
      ff ff ff ff ff ff ff ff fd fd fd fd fd fd ff fd fd fd fd fd fd ff fd fd fd fd fd fd ff fd fd fd ff fd fd ff fd fd fd fd fd fd ff fd fd fd fd fd fd

      Wait, but there are a few dozen interpolation methods:

      use strict;
      use warnings;
      use feature 'say';
      use GD;

      say $^V;
      say $GD::VERSION;
      say eval { GD::VERSION_STRING() } || '-';

      GD::Image-> trueColor( 1 );
      my $i = GD::Image-> new( 8, 8 );
      $i-> filledRectangle( 0, 0, 7, 7, $i-> colorAllocate( 255, 0, 0 ));

      my @ok_methods;
      for my $m ( 1 .. 30 ) {
          eval {
              for my $w ( 1 .. 7 ) {
                  $i-> interpolationMethod( $m );
                  my $j = $i-> copyScaleInterpolated( $w, $w );
                  for my $y ( 0 .. $w - 1 ) {
                      for my $x ( 0 .. $w - 1 ) {
                          my ( $r ) = $j-> rgb( $j-> getPixel( $x, $y ));
                          die unless 255 == $r;
                      }
                  }
              }
              1;
          } or next;
          push @ok_methods, $m;
      }
      say 'looks like ok methods are: ', join ' ', @ok_methods;

      __END__
      v5.38.0
      2.78
      2.3.2
      looks like ok methods are: 1 2 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20

      I have no idea why 3,4,5 i.e.

      GD_BILINEAR_FIXED,
      GD_BICUBIC,
      GD_BICUBIC_FIXED,
      

      are not OK, i.e. don't preserve a dumb uniform fill of a dumb square canvas. I'd laugh out loud if asked whether this list will stay stable for the near future. I have much sympathy for GD, but the above was a little bit too much.

      use strict;
      use warnings;
      use feature 'say';
      use Imager;

      my $i = Imager-> new( xsize => 8, ysize => 8 );
      $i-> box( filled => 1, color => Imager::Color-> new( 255, 0, 0 ));

      for my $w ( 1 .. 7 ) {
          my $j = $i-> scale(
              xpixels => $w,
              # qtype => 'mixing',
              # qtype => 'preview',
          );
          print "\t\t\t$w\n";
          for my $y ( 0 .. $w - 1 ) {
              for my $x ( 0 .. $w - 1 ) {
                  my ( $r ) = $j-> getpixel( x => $x, y => $y )-> rgba;
                  printf '%x ', $r;
              }
              print "\n";
          }
      }

      __END__
      1
      ff
      2
      ff ff ff ff
      3
      ff ff ff ff ff ff ff ff ff
      4
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      5
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      6
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
      7
      ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        omg, ain't GD so very difficult. I'm looking at the Image-Square 0.01_4 testers matrix; what was supposed to be a walk in the park looks like a blood-covered battlefield.

        I know my knowledge of images is lacking but I was beginning to feel I had done something terribly wrong...

        No CoventryCathedral for tests below, simply a red 8 by 8 square to reduce to smaller squares

        Isn't the whole point of the tests to check that the module does what it is supposed to in real situations?

        Users of the module (me, if I am the only one) will be using it to process large, if not huge, images. If it passes the tests on tiny images but fails on large ones, doesn't that rather render the tests meaningless?

        My original tests were based on those in Image::Resize in the 1.t file (https://metacpan.org/release/SHERZODR/Image-Resize-0.5/source/t/1.t), which just checks the dimensions of the generated file. But that module doesn't crop images, which is why I wanted to include tests of the actual output.

        It appears that copyResampled (and interpolation in general, see further) is unstable between versions and plagued with bugs

        I did originally use copy but changed to copyResampled when I decided it would be sensible to add the facility to change the size at the same time...