http://qs1969.pair.com?node_id=1226084

thechartist has asked for the wisdom of the Perl Monks concerning the following question:

I wrote a command-line script to aid my review of trigonometry and improve my understanding of Perl and its testing tools. My initial script accepts two sequences of three integers (representing the degrees, minutes, and seconds of an angle; positive and negative values are accepted), separated by any one of the basic arithmetic operations (+, -, *). It returns the correct arithmetic result in reduced form (i.e. min and sec must each be an integer from 0-59, with borrow/carry done appropriately).

Example input:
prompt> trg_calc 179 0 59 + 0 59 1
Reduced Answer: 180 0 0

My manual testing indicates it works as I intend. When I run perl -w and perl -c on the script, no problems occur. My goal was to write this with automated testing in mind. This is where I have problems. I am going by the example in the Perl Testing: A Developer's Notebook for testing scripts (vs. modules). FWIW, I have not read it in depth; I have skipped around trying to pick out what appears immediately useful. Here is my test file:

#!/usr/bin/env perl
use strict;
use warnings;
use Test::More 'no_plan';

my $test_path = "C:/Users/Greyhat/PDL_Old/trig/src";
require "$test_path/trig_calc.pl";

# Degree Addition Tests
ok( main() eq 'Operation is undef.', '2 +. Tests invocation with empty arg list' );
ok( main( '90 35 29 + 90 24 29' ) eq '180, 59, 58', '3 +. no carry needed' );
ok( main( '179 0 59 + 0 0 1' ) eq '179 1 0', '4 +. test if single carry works correctly' );
ok( main( '179 0 59 + 0 59 1' ) eq '180 0 0', '5 +. tests if multiple carry works correctly' );
ok( main( '-179 0 59 + 0 59 1' ) eq '-178 0 0 ', '6 +. test addition with negatives' );
ok( main( '0 0 1 + 0 0 59' ) eq '0 1 0', '7 +. test if single reduce works correctly' );
ok( main( '90 180 270 + 0 180 90' ) eq '96 6 0', '8 +. tests if multiple reduce calls work correctly' );

With that, I get a host of uninitialized variable warnings, even for things that are declared in the source file, so it appears that my arguments are not making it all the way to the needed subroutine. My assumption was that calling the main function like this simulates someone invoking the script from the command line. When I do use the script from the command line it works as I expect, so there must be a gap in my understanding of the testing tools. The test for reading the file in and the one for invoking with an empty argument list pass, but all of the others fail. Some of my concerns about the tests above:

  • 1. The expression is in quotes, when I would simply prefer to pass an array of numbers. I've tried that, but I still get syntax errors when I run the test file under perl -c trig_calc.t.
  • 2. How should I use return values in the *.pl file so the test tools have something to check? I am not exactly clear how the return values are used.
  • 3. How should I test for array equality in this particular case? I don't think numeric equality is correct here.
  • Thanks for the assistance.

    Replies are listed 'Best First'.
    Re: How to write testable command line script?
    by davido (Cardinal) on Nov 21, 2018 at 16:13 UTC

      In answer to the question "How to write testable command line script?", you do so by separating the view from the model, and you do THAT by putting the business logic (the algorithms, in this case) in a module that can be loaded up by your tests. If the model is small and the view (how input is received and how output is rendered) is small, you can do both in the same file in the form of a Modulino (See Mastering Perl, or have a look at https://perlmaven.com/modulino-both-script-and-module).
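      For concreteness, a minimal modulino might look like the sketch below (placeholder logic, not the OP's actual script): the file runs main() when executed directly, but only defines its subs when loaded via require or use from a test file.

```perl
#!/usr/bin/env perl
# Minimal modulino sketch (illustrative placeholder logic).
use strict;
use warnings;

sub main {
    my @args = @_ ? @_ : @ARGV;    # accept test args, or fall back to @ARGV
    return scalar @args;           # placeholder "model" result
}

# The modulino switch: run only when invoked directly, not when require-d.
print main(), "\n" unless caller();

1;    # true value so require() succeeds
```

      A test file can then `require` this file and call main() directly, while command-line users see no difference.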

      If your presentation is tightly coupled with your algorithms (your business logic) then the first step is to get tests around the full life cycle of the script, and then begin refactoring to achieve the level of decoupling needed to facilitate more thorough unit testing.

      It does appear that your current script at least has a main() subroutine, but I suspect that your problem in how you are invoking main() is that you are passing all the fields as a single string, whereas the script is expecting each field to be an element passed by @ARGV. But we haven't seen the target code so that's mostly a guess. If your code looks like this:

      sub main {
          my @fields = scalar(@_) ? @_ : @ARGV;
          ....
          return @result_set;
      }

      Then at least main accepts a parameter list. If it does not, then you probably are only working with @ARGV, so you'll need to set @ARGV before each test call, and not bother passing args to main(). But if main() does take args, you're partway there, but probably need to pass them as a list rather than a single string. Also, presumably your script prints the result to STDOUT. But your tests are looking in the return value of main for a string. That likely is broken. You'll probably need to assure that results are always returned by main, and that you are testing the results as a list rather than just a string.
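      That advice can be sketched as follows, with a stand-in main() (hypothetical, not the OP's code) that falls back to @ARGV; a test then localizes @ARGV to simulate a command-line run, and compares results as a list:

```perl
use strict;
use warnings;
use Test::More;

# Stand-in main(): prefers passed args, falls back to @ARGV.
sub main {
    my @fields = @_ ? @_ : @ARGV;
    return @fields;
}

{
    # Simulate `script 179 0 59 + 0 59 1` without spawning a process.
    local @ARGV = (179, 0, 59, '+', 0, 59, 1);
    is_deeply [ main() ], [179, 0, 59, '+', 0, 59, 1], 'main() read @ARGV';
}

# Passing a list directly also works; compare as a list, not a string.
is_deeply [ main(180, 0, 0) ], [180, 0, 0], 'explicit args take precedence';

done_testing;
```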

      This is where separating the view and controller from the model is important: You actually need two views -- one for when this is invoked from the command line, and one for when main() is called directly, because when main() is called directly you probably don't need to be sending output to STDOUT.

      If you absolutely must put output on STDOUT rather than as a return value, you can test that using Test::Output or Capture::Tiny.
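      A core-only sketch of the idea (Capture::Tiny's capture_stdout and Test::Output wrap this up far more robustly; sub names here are invented):

```perl
use strict;
use warnings;

# A view sub that prints rather than returns (like the OP's script).
sub show_answer { print "Reduced Answer: @_\n" }

# Redirect STDOUT into an in-memory buffer while running some code.
sub capture_stdout_of {
    my ($code) = @_;
    local *STDOUT;
    open STDOUT, '>', \my $buffer or die "can't reopen STDOUT: $!";
    $code->();
    close STDOUT;
    return $buffer;
}

my $out = capture_stdout_of(sub { show_answer(180, 0, 0) });
# $out now holds "Reduced Answer: 180 0 0\n" and can be checked with is().
```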

      Update: I see that you have started down the path, and are passing @ARGV to main, so you're partway there.


      Dave

        Thanks for the input.

        I was working on the "modulino" approach first, before I start separating things out into modules. I have a few Perl scripts that I'd like to re-write in a more verifiable fashion, so I am jumping around the testing tools and docs to see what I can get done first.

        I'd also like to help out more, particularly in the PDL modules, but I have to learn a lot more about the testing tools before I could be useful to anyone.

        My questions regarding return values:

      • 1. How are return values propagated throughout the call chain? I.e., if I indirectly call my reduce sub through a number of other functions (in this case I have the root main sub, an add sub, and finally the reduce sub that calculates the final answer and exits), what should each of those subroutines return to make testing consistent?
      • 2. How do you deploy scripts written in the modular style you advocate? I assume the *.pm files are called by a top-level *.pl file that acts like the file holding the main function in a C program, if that makes sense. I.e., I write myscript.pl, which imports subs and structures from any number of *.pm files; the user invokes myscript.pl and has no need to worry about any of the *.pm files if they are installed correctly.

          Each subroutine should be tested individually. Consider this contrived and silly example:

          use List::Util qw(sum);
          use Scalar::Util qw(looks_like_number);
          use Test::More;

          ok looks_like_number(1), 'Found a number.';
          ok !looks_like_number('a'), 'Rejected a non-number.';
          is_deeply [map {$_ + 1} (1,2,3,4)], [2,3,4,5], 'Correct mapping.';
          is_deeply [grep {looks_like_number($_)} qw(a 1 b 2 c 3 d 4)], [1,2,3,4], 'Correct filter.';
          cmp_ok sum(1,2,3,4), '==', 10, 'Sum was correct.';
          cmp_ok sum_of_incremented_nums(qw(1 a 2 b c 3 d 4)), '==', 14, 'Summed dirty list properly.';

          done_testing;

          # Integration:
          sub sum_of_incremented_nums {
              return sum(map {$_ + 1} grep {looks_like_number($_)} @_);
          }

          Here we've tested (minimally) all the components individually, and then tested the thing that uses the components.

          How to deploy? A really simple way is to use the features of ExtUtils::MakeMaker. It can place your modules where modules live, and your executables where they're supposed to live on a given system. And the user is able to specify alternate locations based on environment settings and on how Perl was compiled and where it lives. You'll have a Makefile.PL that generates a makefile customized for your specific needs. The makefile will create the proper make directives, and you'll have 90% of what goes into a CPAN distribution when you're done. Consider any module on CPAN that bundles an executable script as part of the distribution as prior art. I haven't looked recently, but Carton, App::cpanoutdated, App::cpanminus, Devel::NYTProf, Perl::Critic, and Perl::Tidy are all examples of CPAN modules that bundle executables.
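          As a sketch of what that looks like in practice (module and script names here are hypothetical, chosen to match this thread), a minimal Makefile.PL for a distribution bundling an executable might be:

```perl
# Hypothetical Makefile.PL sketch; assumes lib/TrigCalc.pm and
# bin/trig_calc exist in the distribution.
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'TrigCalc',
    VERSION_FROM => 'lib/TrigCalc.pm',   # pulls $VERSION from the module
    EXE_FILES    => ['bin/trig_calc'],   # installed into the user's bin path
    PREREQ_PM    => { 'Test::More' => 0 },
);
```

          Then the usual `perl Makefile.PL && make && make test && make install` sequence builds, tests, and installs both the module and the script.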

          That said, you might also consider a minimal packaging system like Carton. Or combine that with something like Docker where you have more control over the isolated environment.

          As for a structure, I typically do something like this:

          ./projectdir
              - projectdir/lib/
              - projectdir/bin/
              - projectdir/t/
              - projectdir/xt/
              - projectdir/README

          In your executable (projectdir/bin/foo) you might do something like this:

          #!/usr/bin/env perl
          use strict;
          use warnings;
          use FindBin qw($Bin);
          use lib "$Bin/../lib";
          use MyModule;
          ...

          This works in situations where you aren't deploying the module to a location already known via PERL5LIB, and aren't using some tool such as Carton to manage the paths for you.


          Dave

          If I understand you right, and you're interested in helping out with PDL, then joining the IRC channel and/or mailing lists shown on the PDL node is the best way to start.
    Re: How to write testable command line script?
    by GrandFather (Saint) on Nov 20, 2018 at 23:49 UTC

      It's likely that you are using global variables (as a general thing that is bad) that are not getting initialised before main is called. Try moving all global variables into main - that will break stuff and mean that you have to pass the variables to any sub that needs them, but that is a good thing!
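      A toy illustration of why that matters (hypothetical code, not the OP's): a file-scoped array keeps accumulating across calls, while a lexical inside the sub starts clean every time.

```perl
use strict;
use warnings;

# Global-state version: @answer_global survives between calls, so a
# second call sees leftovers from the first.
my @answer_global;
sub reduce_global {
    push @answer_global, @_;
    return @answer_global;
}

# Lexical version: state lives inside the call and is returned explicitly.
sub reduce_lexical {
    my @answer = @_;
    return @answer;
}

reduce_global(1, 2);
my @second = reduce_global(3);     # (1, 2, 3) -- stale values included!
my @clean  = reduce_lexical(3);    # (3) -- no history
```

      In a one-shot command-line run the global never gets a chance to go stale, which is why this class of bug only shows up once a test file calls the sub repeatedly in one process.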

      If that doesn't match with your code, show us the code or at least a stripped down version that demonstrates the issue.

      Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
        Here is the most important subroutine that I separated out when I was doing my experiments. Once I get the basics right, I know the tests could be organized more efficiently in a different structure; I was thinking of a hash of arrays.

        reduce.pl:
        #!/usr/bin/env perl
        use strict;
        use warnings;

        my @answer;    # added to suppress warnings, but it still doesn't work.
        my ($a, $b, $c) = 0;

        print "0. Value of ARGV is @ARGV; Value of magic array var is @_.\n";

        # Take list of arguments from any of the 4 other subroutines.
        # In principle, should accept variable-length arguments
        # and recursively reduce the items in the list from right to left.
        #
        # Termination: @angle has length 1. This is pushed onto the @answer array.
        # Case 1: reduce a negative number by adding 60 to it, and
        #         subtracting 1 from the number on its left.
        # Case 2: reduce a positive number >= 60 by subtracting 60 and
        #         adding 1 to the number on its left.
        sub reduce {
            # print "0. Value of magic array var is @_.\n";
            my @angle = @_;
            # @_ = undef;
            my ($b, $c) = ($angle[-2], $angle[-1]);    # reduce from end.

            # Warnings when running the test script indicate $c in the if statement
            # is not defined. But it is defined at the top, and should be defined if
            # the @ARGV variable is being passed correctly.
            if ($c < 0 && scalar(@angle) > 1) {
                until ($c >= 0 && $c < 60) {
                    $c += 60;
                    $b -= 1;
                }
                unshift(@answer, $c);
                pop(@angle);
                @angle[-1] = $b;

                # Debug print statements
                print "2. b = $b, c = $c\n";
                print "2. Angle array is @angle.\n ";
                print "2. Value of magic array var is @_.\n";
                print "2. Values in answer array: @answer.\n";
                ####
                &reduce(@angle);
            }
            elsif ($c >= 60 && scalar(@angle) > 1) {
                until ($c < 60 && $c >= 0) {
                    $c -= 60;
                    $b += 1;
                }
                unshift(@answer, $c);
                pop(@angle);
                @angle[-1] = $b;

                # Debug print statements
                print "3. b = $b, c = $c\n";
                print "3. Angle array is @angle.\n ";
                print "3. Value of magic array var is @_.\n";
                print "3. Values in answer array: @answer.\n";
                ####
                &reduce(@angle);
            }
            elsif ( ($c >= 0 && $c < 60) && scalar(@angle) > 1 ) {
                unshift(@answer, $c);
                pop(@angle);
                &reduce(@angle);
            }
            else {
                unshift(@answer, @angle);
                print "Reduced answer: @answer \n";
            }
            return $answer;
        }

        main( @ARGV ) unless caller();

        sub main {
            &reduce( @ARGV );
        }
        Test code:
        #!/usr/bin/env perl
        use strict;
        use warnings;
        use Test::More 'no_plan';

        my $test_path = "C:/Users/Greyhat/PDL_Old/trig/src";
        ok( require( "$test_path/reduce.pl" ), 'Load file correctly.' ) or exit;

        my @test_2 = undef;
        my $answer_2 = 1;
        my $note_2 = "undef | $answer_2 | 2. Call with no value.";

        my @test_3 = (180, 59, 58);
        my @answer_3 = (180, 59, 58);
        my $note_3 = "@test_3 | @answer_3 | 3. already reduced";

        my @test_4 = (179, 0, 60);
        my @answer_4 = (179, 1, 0);
        my $note_4 = "@test_4 | @answer_4 | 4. test if single carry works correctly";

        my @test_5 = (179, 59, 60);
        my @answer_5 = (180, 0, 0);
        my $note_5 = "@test_5 | @answer_5 | 5. tests if multiple carry works correctly";

        my @test_6 = (-179, 60, 0);
        my @answer_6 = (-178, 0, 0);
        my $note_6 = "@test_6 | @answer_6 | 6. test addition with negatives";

        my @test_7 = (0, 0, -60);
        my @answer_7 = (-1, 59, 2);
        my $note_7 = "@test_7 | @answer_7 | 7. test negative borrow works correctly";

        my @test_8 = (90, 360, 360);
        my @answer_8 = (96, 6, 0);
        my $note_8 = "@test_8 | @answer_8 | 8. tests if multiple reduce calls work correct";

        # Degree Reduction Tests
        # Similar problems regardless of how I call the function. Neither testing
        # via main() nor directly calling reduce() is effective.
        ok( reduce() == $answer_2, $note_2 );
        ok( reduce( @test_3 ) == @answer_3, $note_3 );
        ok( reduce( @test_4 ) == @answer_4, $note_4 );
        ok( reduce( @test_5 ) == @answer_5, $note_5 );
        ok( reduce( @test_6 ) == @answer_6, $note_6 );
        ok( reduce( @test_7 ) == @answer_7, $note_7 );
        ok( reduce( @test_8 ) == @answer_8, $note_8 );

        # Degree Subtraction Tests
        # Degree Multiplication Tests
        # Degree Division Tests
        # More Complicated expressions with

          First up: never just add stuff to suppress warnings or errors. Figure out what the cause is and fix the cause!

          In this case the warning was telling you something you needed to know. Lets boil the code down a little to demonstrate the issue:

          #!/usr/bin/env perl
          use strict;
          use warnings;

          reduce(1);

          sub reduce {
              my @array = @_;

              print $array[-2];
          }

          Prints:

          Use of uninitialized value in print at ...\noname1.pl line 10.

          Changing the call to reduce(1, 2); works as expected. The actual problem is you are passing only one argument and using that to populate the array. You then try to access the array expecting two arguments.

          Note that much of your code is fairly old school. In particular, don't use & to call functions - it doesn't do what you expect. Also, don't use @array[$idx]. If you want a single element use $array[$idx] - $ instead of @.
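          A quick illustration of both points:

```perl
use strict;
use warnings;

my @angle = (179, 0, 59);

my $last = $angle[-1];      # one element: $ sigil, yields 59
my @pair = @angle[-2, -1];  # a slice (several elements): @ sigil
$angle[-1] = 0;             # assigning one element also uses $

# Call subs without a leading &: &foo(...) bypasses prototypes, and
# plain &foo; silently reuses the caller's @_.
sub count_parts { return scalar @_ }
my $n = count_parts(@angle);    # 3
```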

          Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond

          In line with your code here, this is my take on what one might want to see in an exporting module and its associated .t file. Note that it's not even necessary to go to the trouble of exporting stuff if you're willing to use fully-qualified subroutine names, e.g., TrigCalc::reduce(...), in the client code - very convenient for smallish, quick-and-dirty modules (update: and still perfectly testable with Test::*). You can see that using a .t file to help specify a program and identify problem behaviors can be very helpful to all concerned.

          The  reduce() function in the .pm file is defined in terms of two non-exported functions. These two functions could easily have been folded into reduce() (it might even have simplified things a bit), but leaving them separate helps document the behavior of the parent function and also provides the opportunity to independently test these two component behaviors even though the functions embodying them are not exported.

          TrigCalc.pm:

          TrigCalc.t (note that I've used a different result for the  (0, 0, -60) test case):

          Update 1: Forgot to include the output of the script. Not really necessary, but I usually do, so what the heck...
          Output:

          Update 2: Here's a version of  reduce() with all component behaviors folded into the function. It handles incomplete argument lists better IMHO, also handles undef-s more gracefully. Fully tested.

          sub reduce {
              # reduce from rightmost end of argument list.
              # left-pad input arguments with 0s to avoid inaccessible items.
              my ($degs,    # degrees
                  $mins,    # minutes
                  $secs,    # seconds
                  ) = map $_ || 0, (0, 0, 0, @_)[ -3 .. -1 ];    # handles undefs

              use integer;
              $secs = $degs * SECS_PER_DEGREE + $mins * SECS_PER_MINUTE + $secs;
              $degs = $secs / SECS_PER_DEGREE;
              $secs %= SECS_PER_DEGREE;
              $mins = $secs / SECS_PER_MINUTE;
              $secs %= SECS_PER_MINUTE;
              return $degs, $mins, $secs;
          }
          (If you have Perl version 5.10+, the || operator in the  $_ || 0 map expression can be changed to the // defined-or operator.)
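          Since the constants live in the (omitted) module, here is a self-contained rendering of that sub, assuming SECS_PER_MINUTE = 60 and SECS_PER_DEGREE = 3600:

```perl
use strict;
use warnings;
use constant {
    SECS_PER_MINUTE => 60,
    SECS_PER_DEGREE => 3600,
};

sub reduce {
    # Left-pad with 0s, then map undefs to 0 (|| also zeroes '' and 0).
    my ($degs, $mins, $secs) = map { $_ || 0 } (0, 0, 0, @_)[ -3 .. -1 ];
    use integer;    # C-style truncating division and modulus
    $secs = $degs * SECS_PER_DEGREE + $mins * SECS_PER_MINUTE + $secs;
    $degs = $secs / SECS_PER_DEGREE;   $secs %= SECS_PER_DEGREE;
    $mins = $secs / SECS_PER_MINUTE;   $secs %= SECS_PER_MINUTE;
    return $degs, $mins, $secs;
}

my @r = reduce(179, 59, 60);    # (180, 0, 0)
```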


          Give a man a fish:  <%-{-{-{-<

    Re: How to write testable command line script?
    by 1nickt (Canon) on Nov 20, 2018 at 23:36 UTC

      Hi, does your subroutine expect a string or a list of args?

      Edit: Oh, I see you say it does. To pass the args:

      ok( main(90,180,270,'+',0,180,90) ...

      If the sub returns the list of numbers, test with is_deeply() from Test::More. If it returns a string, use is().
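      For example, with stand-in subs for the two return styles:

```perl
use strict;
use warnings;
use Test::More;

sub main_as_list   { return (180, 59, 58) }    # returns a list
sub main_as_string { return '180 59 58' }      # returns one string

# Lists/arrays: wrap both sides in array refs and compare structurally.
is_deeply [ main_as_list() ], [ 180, 59, 58 ], 'list compared element-wise';

# Strings: a plain is() does the job.
is main_as_string(), '180 59 58', 'string compared exactly';

done_testing;
```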

      Hope this helps!



      The way forward always starts with a minimal test.

        Thanks for the input re: Test::Deeply. I thought I could test this easily with Test::Simple, or Test:More. I still have a lot to learn about the testing modules.

          Hi, just to clarify, both testing methods I mentioned, is() and is_deeply() are from Test::More. The latter is often enough for comparing two arrays, for example, but if things get more complex then indeed you may want to move up to Test::Deep, which is much more flexible and powerful, providing cmp_deeply(), cmp_bag(), ignore() and other useful tools.

          Or, you can do as I do and simply always use:

          use Test::Most 'die';

          # tests

          done_testing;

          __END__
          ... because Test::Most provides Test::Deep as well as several other useful libraries (the 'die' flag makes the test file bail out on the first failure, and strict and warnings are loaded for you).

          You are right; there are a lot of testing libraries and tools (and you can make your own!). Using them is one of the most satisfying parts of Perl software development, for me.

          Hope this helps!


          The way forward always starts with a minimal test.
    Re: How to write testable command line script?
    by perlancar (Hermit) on Nov 22, 2018 at 01:29 UTC

      In line with what davido said, I would also suggest separating your "business logic" from your "plumbing code", where the plumbing code is the environment-specific code to get your business logic code to work (in the case of a command-line script, the plumbing code includes code to parse command-line options and arguments, to respond to --help or --version, to output the result, etc).

      I'll use my framework Perinci::CmdLine for illustration here:

      ### file: trig_calc1.pl
      #!/usr/bin/env perl
      use strict;
      use warnings;

      our %SPEC;
      $SPEC{trig_calc} = {
          v => 1.1,
          args => {
              degree1 => {schema=>'int*', req=>1, pos=>0},
              minute1 => {schema=>'int*', req=>1, pos=>1},
              second1 => {schema=>'int*', req=>1, pos=>2},
              op      => {schema=>'str*', req=>1, pos=>3},
              degree2 => {schema=>'int*', req=>1, pos=>4},
              minute2 => {schema=>'int*', req=>1, pos=>5},
              second2 => {schema=>'int*', req=>1, pos=>6},
          },
          args_as => 'array',
          result_naked => 1,
      };
      sub trig_calc {
          my ($d1, $m1, $s1, $op, $d2, $m2, $s2) = @_;
          if ($op eq '+') {
              my ($dr, $mr, $sr) = (0, 0, 0);
              $sr += $s1 + $s2;
              if ($sr >= 60) { $mr += int($sr/60); $sr = $sr % 60 }
              $mr += $m1 + $m2;
              if ($mr >= 60) { $dr += int($mr/60); $mr = $mr % 60 }
              $dr += $d1 + $d2;
              return [$dr, $mr, $sr];
          } else {
              die "Unknown operation '$op'";
          }
      }

      if ($ENV{HARNESS_ACTIVE}) {
          require Test::More;
          Test::More->import;
          is_deeply(trig_calc(90, 35, 29, '+', 90, 24, 29), [180, 59, 58]);
          is_deeply(trig_calc(90, 35, 29, '+', 90, 24, 31), [181, 0, 0]);
          done_testing();
      } else {
          require Perinci::CmdLine::Any;
          Perinci::CmdLine::Any->new(url => '/main/trig_calc')->run;
      }

      To run the script on the command-line:

      % perl trig_calc1.pl 90 35 29 + 90 24 29
      180
      59
      58

      The framework happens to handle most of the plumbing code for you, like parsing command-line arguments and outputting the result. You just need to write your business-logic code as a regular Perl function, accepting arguments in @_ and returning a result. As a bonus, the framework also generates a usage message (--help), generates version information (--version), checks arguments, and can output the result as JSON (--json), among a few other things.

      To test the script, use prove which is the standard harness that comes with perl:

      % prove trig_calc1.pl
      trig_calc1.pl .. ok
      All tests successful.
      Files=1, Tests=2,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.04 cusr  0.00 csys =  0.06 CPU)
      Result: PASS

      Framework-specific information: As an alternative to specifying tests directly with is_deeply() et al., with the framework you can also put the tests as examples in the function metadata. The examples can be tested using the Test::Rinci module. The benefits of specifying the tests as examples include having the examples shown in the --help message as well as in the generated POD documentation.

      ### file: trig_calc2.pl
      #!/usr/bin/env perl
      use strict;
      use warnings;

      our %SPEC;
      $SPEC{trig_calc} = {
          v => 1.1,
          args => {
              degree1 => {schema=>'int*', req=>1, pos=>0},
              minute1 => {schema=>'int*', req=>1, pos=>1},
              second1 => {schema=>'int*', req=>1, pos=>2},
              op      => {schema=>'str*', req=>1, pos=>3},
              degree2 => {schema=>'int*', req=>1, pos=>4},
              minute2 => {schema=>'int*', req=>1, pos=>5},
              second2 => {schema=>'int*', req=>1, pos=>6},
          },
          args_as => 'array',
          result_naked => 1,
          examples => [
              {
                  argv   => [90, 35, 29, '+', 90, 24, 29],
                  result => [180, 59, 58],
              },
              {
                  argv   => [90, 35, 29, '+', 90, 24, 31],
                  result => [181, 0, 0],
              },
          ],
      };
      sub trig_calc {
          my ($d1, $m1, $s1, $op, $d2, $m2, $s2) = @_;
          if ($op eq '+') {
              my ($dr, $mr, $sr) = (0, 0, 0);
              $sr += $s1 + $s2;
              if ($sr >= 60) { $mr += int($sr/60); $sr = $sr % 60 }
              $mr += $m1 + $m2;
              if ($mr >= 60) { $dr += int($mr/60); $mr = $mr % 60 }
              $dr += $d1 + $d2;
              return [$dr, $mr, $sr];
          } else {
              die "Unknown operation '$op'";
          }
      }

      if ($ENV{HARNESS_ACTIVE}) {
          require Test::More;
          Test::More->import;
          require Test::Rinci;
          Test::Rinci->import;
          metadata_in_module_ok('main', {load=>0});
          done_testing();
      } else {
          require Perinci::CmdLine::Any;
          Perinci::CmdLine::Any->new(url => '/main/trig_calc')->run;
      }

      As the script grows, it is recommended to split the business logic into its own module(s) under lib/ (e.g. lib/TrigCalc.pm) and the tests into their own *.t files in the t/ subdirectory (e.g. t/trig_calc.t). Your script itself now goes in script/ (e.g. script/trig_calc):

      ### file: lib/TrigCalc.pm
      package TrigCalc;
      use strict;
      use warnings;

      our %SPEC;
      $SPEC{trig_calc} = {
          v => 1.1,
          args => {
              degree1 => {schema=>'int*', req=>1, pos=>0},
              minute1 => {schema=>'int*', req=>1, pos=>1},
              second1 => {schema=>'int*', req=>1, pos=>2},
              op      => {schema=>'str*', req=>1, pos=>3},
              degree2 => {schema=>'int*', req=>1, pos=>4},
              minute2 => {schema=>'int*', req=>1, pos=>5},
              second2 => {schema=>'int*', req=>1, pos=>6},
          },
          args_as => 'array',
          result_naked => 1,
      };
      sub trig_calc {
          my ($d1, $m1, $s1, $op, $d2, $m2, $s2) = @_;
          if ($op eq '+') {
              my ($dr, $mr, $sr) = (0, 0, 0);
              $sr += $s1 + $s2;
              if ($sr >= 60) { $mr += int($sr/60); $sr = $sr % 60 }
              $mr += $m1 + $m2;
              if ($mr >= 60) { $dr += int($mr/60); $mr = $mr % 60 }
              $dr += $d1 + $d2;
              return [$dr, $mr, $sr];
          } else {
              die "Unknown operation '$op'";
          }
      }

      1;
      ### file: script/trig_calc
      #!/usr/bin/env perl
      use strict;
      use warnings;
      use TrigCalc;
      use Perinci::CmdLine::Any;
      Perinci::CmdLine::Any->new(url => '/TrigCalc/trig_calc')->run;
      ### file: t/trig_calc.t
      #!perl
      use strict;
      use warnings;
      use Test::More;
      use TrigCalc;

      is_deeply(TrigCalc::trig_calc(90, 35, 29, '+', 90, 24, 29), [180, 59, 58]);
      is_deeply(TrigCalc::trig_calc(90, 35, 29, '+', 90, 24, 31), [181, 0, 0]);
      done_testing();

      This structure is the standard Perl distribution structure and there are tools to help you create the Perl distribution to upload to CPAN, manage dependencies, and so on.

      You test the application using prove -l (which will run all the t/*.t files) and run the application using perl -Ilib script/trig_calc, but after the distribution is installed you can just run trig_calc, because the script will be installed somewhere on your PATH.

        Thanks! I will have to study this example more closely. I had planned on some revisions as I extend the calculator to have more functionality than you incorporated into your example.

    Re: How to write testable command line script?
    by eyepopslikeamosquito (Archbishop) on Nov 22, 2018 at 06:29 UTC

        Thanks to the numerous examples, along with some deep thought, I have some success to report!

        While it isn't the prettiest code, my reduce subroutine passes the automated tests. There are still some issues to work out, but the main problems I solved were:
      • 1. Picking which test subroutine to use. I ended up using is_deeply, based on the post by AnomalousMonk. With that example, and some study of the Perl documentation, I figured out how to pass the test data to the subroutine correctly.
      • 2. Due to the assumptions I made when initially writing the script, I had hoped that returning the entire array would be sufficient. The problem was that the returned array contained values left over from prior calls, values that don't exist when I run from the command line, and this broke the structural equality comparisons. I ended up simply returning a list of scalars, which corrected the problem.
      • I will likely re-write this and extend it using some of the suggestions of perlancar. But the initial goal of getting the tests to run has been met.

        reduce.t

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use Test::More 'no_plan';

        my $test_path = "C:/Users/Greyhat/PDL_Old/trig/src";
        ok( require( "$test_path/reduce.pl" ), 'Load file correctly.' ) or exit;

        # Not implemented
        #my $test_2   = [0,0,0];
        #my $answer_2 = [0,0,0];
        #my $note_2   = "@$test_2 | @$answer_2 | 2. Call with no value.";

        my $test_3   = [180, 59, 58];
        my $answer_3 = [180, 59, 58];
        my $note_3   = "@$test_3 | @$answer_3 | 3. already reduced";

        my $test_4   = [19, 0, 60];
        my $answer_4 = [19, 1, 0];
        my $note_4   = "@$test_4 | @$answer_4 | 4. test if single carry works correctly";

        my $test_5   = [179, 59, 60];
        my $answer_5 = [180, 0, 0];
        my $note_5   = "@$test_5 | @$answer_5 | 5. tests if multiple carry works correctly";

        my $test_6   = [-179, 60, 0];
        my $answer_6 = [-178, 0, 0];
        my $note_6   = "@$test_6 | @$answer_6 | 6. test addition with negatives";

        my $test_7   = [0, 0, -60];
        my $answer_7 = [-1, 59, 0];
        my $note_7   = "@$test_7 | @$answer_7 | 7. test negative borrow works correctly";

        my $test_8   = [90, 360, 360];
        my $answer_8 = [96, 6, 0];
        my $note_8   = "@$test_8 | @$answer_8 | 8. tests if multiple reduce calls work correct";

        sub main { reduce( @ARGV ); }

        # Degree Reduction Tests
        # Need to pass array refs to is_deeply
        #ok( &main() == $answer_2, $note_2 );

        # From perl monks test code
        # is_deeply [ reduce(@$ar_args) ], $ar_expected, $full_comment;
        is_deeply [ reduce(@$test_3) ], $answer_3, $note_3;
        is_deeply [ reduce(@$test_4) ], $answer_4, $note_4;
        is_deeply [ reduce(@$test_5) ], $answer_5, $note_5;
        is_deeply [ reduce(@$test_6) ], $answer_6, $note_6;
        is_deeply [ reduce(@$test_7) ], $answer_7, $note_7;
        is_deeply [ reduce(@$test_8) ], $answer_8, $note_8;

        # Degree Subtraction Tests
        # Degree Multiplication Tests
        # Degree Division Tests
        reduce.pl
        #!/usr/bin/env perl
        use strict;
        use warnings;

        my ($a, $b, $c) = 0;

        print "0. Value of ARGV is @ARGV; Value of magic array var is @_.\n";

        # Take list of arguments from any of the 4 other subroutines.
        # In principle, should accept variable-length arguments
        # and recursively reduce the items in the list from right to left.
        #
        # Termination: @angle has length 1. This is pushed onto the @answer array.
        # Case 1: reduce a negative number by adding 60 to it, and
        #         subtracting 1 from the number on its left.
        # Case 2: reduce a positive number >= 60 by subtracting 60 and
        #         adding 1 to the number on its left.
        #
        # NOTE 1: CHECK BRACKETS AROUND UNTIL LOOP! MAY NEED TO MOVE CLOSING BRACKET!
        # NOTE 2: Add check to test for length of @angle array! Then it should work.
        # NOTE 3: Add elsif to test for case where $c is ok value but scalar(@angle) > 1.
        #         Just push value to answer array.
        # NOTE 4: Does not handle multiples of 60 correctly. Likely scope issue.
        my @answer;

        sub reduce {
            # print "0. Value of magic array var is @_.\n";
            my @angle = @_;
            # @_ = undef;
            my ($b, $c) = ($angle[-2], $angle[-1]);    # reduce from end.

            if ($c < 0 && scalar(@angle) > 1) {
                until ($c >= 0 && $c < 60) {
                    $c += 60;
                    $b -= 1;
                }
                unshift(@answer, $c);
                pop(@angle);
                @angle[-1] = $b;

                # Debug print statements
                print "2. b = $b, c = $c\n";
                print "2. Angle array is @angle.\n ";
                print "2. Value of magic array var is @_.\n";
                print "2. Values in answer array: @answer.\n";
                ####
                &reduce(@angle);
            }
            elsif ($c >= 60 && scalar(@angle) > 1) {
                until ($c < 60 && $c >= 0) {
                    $c -= 60;
                    $b += 1;
                }
                unshift(@answer, $c);
                pop(@angle);
                @angle[-1] = $b;

                # Debug print statements
                print "3. b = $b, c = $c\n";
                print "3. Angle array is @angle.\n ";
                print "3. Value of magic array var is @_.\n";
                print "3. Values in answer array: @answer.\n";
                ####
                &reduce(@angle);
            }
            elsif ( ($c >= 0 && $c < 60) && scalar(@angle) > 1 ) {
                unshift(@answer, $c);
                pop(@angle);
                &reduce(@angle);
            }
            else {
                unshift(@answer, @angle);
                print "Reduced answer: @answer \n";
                return $answer[0], $answer[1], $answer[2];
            }
        }

        main( @ARGV ) unless caller();

        sub main {
            reduce( @_ );
        }

          ... the initial goal of getting the tests to run has been met.

          I'm looking at your  $test_7: (0, 0, -60) -> (-1, 59, 0) (result: -1° 59' 0"). (There's a similar test 7 here with result (-1, 59, 2).) I would have thought the normalized or reduced result to be  (0 -1 0) (0° -1' 0"). If the required result shown in the code is correct, it suggests a set of sign propagation rules I have yet to see. Can you expand on this?

          I notice that you haven't recast your core code into a module yet. Doing so is a good idea for many reasons, including testing. Note that Test::More::use_ok() is available for modules and Test::More::require_ok() for more humble files.

          The other point that occurred to me is also in terms of general design. Rather than always reduce()-ing deg/min/sec tuples to other d/m/s tuples, it might be less of a headache to normalize d/m/s tuples to, say, integer or decimal fraction arc-seconds (or maybe radians?) or whatever's most convenient, do all the trig in these standard units, then convert back to the ultimate d/m/s (or whatever) output form just once as a final step. Just a thought...
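          The suggestion amounts to something like this sketch (arc-seconds chosen as the working unit; the sub names are invented for illustration):

```perl
use strict;
use warnings;

# Convert a (degrees, minutes, seconds) tuple to integer arc-seconds.
sub dms_to_arcsec {
    my ($d, $m, $s) = @_;
    return $d * 3600 + $m * 60 + $s;
}

# Convert back only once, at output time.
sub arcsec_to_dms {
    my ($total) = @_;
    my $sign = $total < 0 ? -1 : 1;
    $total = abs $total;
    my $d = int($total / 3600);  $total %= 3600;
    my $m = int($total / 60);    $total %= 60;
    return ($sign * $d, $m, $total);    # sign carried on degrees only
}

# All arithmetic happens in the one working unit:
my $sum = dms_to_arcsec(179, 0, 59) + dms_to_arcsec(0, 59, 1);
my @dms = arcsec_to_dms($sum);    # (180, 0, 0)
```

          One design choice to state up front: here the sign is carried on the degrees component only, so a negative angle such as -1° 59' means -(1° 59'). Picking and documenting one such convention sidesteps the sign-propagation ambiguity noted above.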


          Give a man a fish:  <%-{-{-{-<