http://qs1969.pair.com?node_id=1227454

Good morning nuns and monks,

Nothing to read during the next holidays? ;=)

I wrote this tutorial and I'd really appreciate your comments and corrections.

The following tutorial is in part the fruit of a learn-by-teaching process, so before pointing newbies to my work I need some confirmation.

The tutorial is a step-by-step journey into Perl module development with tests, documentation and git integration. It seemed to me the bare-minimum approach as of late 2018.

Being a long post (over the 64 KB PerlMonks limit), the second part is in a reply to this node; because of this the table of contents links for the second part are broken (I'll fix them sooner or later).

This material is already in its GitHub repository, with a long name (which I hope makes it easier to find). The code generated in this tutorial also has its own archived repository.

I'll gladly accept comments here or as pull requests (see contributing), whichever you prefer, about the following:

The module code presented is... fairly simplistic and I do not plan to change it: the tutorial is about everything but coding. Tests, documentation, distribution and revision control are the points of this guide, and I tried to keep everything as small as possible. If you really cannot resist rewriting the module's code, rewrite it all and I can add a TIMTOWTDI section, just for amusement.

On the other hand, day eight: other module techniques has room for improvements and additions: if you want to share your own techniques about testing, Makefile hacking or automating distribution, I think this is the place. I chose module-starter to sketch out the module as it seemed to me simple and complete, but it has some quirks. Examples of other tools could be worth another day of tutorial, but keep it simple.

Once you have commented on this tutorial I'll remove the [RFC] from the title and point newcomers to this guide (or better, should it be reposted in another section?), if you judge it worth reading.

Thanks!

L*

UPDATE 20 Dec. Added a readmore tag around the content below. The online repository is receiving some pull requests ;) so I added a version number to the doc. Tux is very busy but pointed me to Release::Checklist and I'll add it to the tutorial.


Discipulus's step by step tutorial on module creation with tests and git

day zero: introduction

day one: prepare the ground

day two: some change and tests

day three: finally some code

day four: the PODist and the coder

day five: deeper tests

day six: testing STDERR

day seven: the module is done but not ready

day eight: other module techniques

bibliography

acknowledgements

day zero: introduction

foreword

This tutorial is not about coding: that's it! The code, idea and implementation presented below are, by choice, futile, piffling and trifling (the module resulting from this tutorial is available, archived in its own repository).

This tutorial, on the other hand, tries to show the beginner one possible path in module creation. As always in Perl there are many ways to get the job done, and mine is far from the optimal one, but since I encountered many difficulties choosing my own path, perhaps sharing my way can help someone else.

There are other similar but different sources of knowledge about module creation, notably José's Guide for creating Perl modules: read it for some points I do not explore (well, read it anyway: it's worth it).

the bag of tools

As for every task, check your equipment before starting. You probably already have perl, a shell (or something less fortunate if you are on Windows, like me ;) and a favourite text editor or IDE.

In this tutorial we'll use git from the command line and GitHub to store our work in a central place (a very handy feature). So get a GitHub account and a git client.

This tutorial will focus on the importance (I'd say pre-eminence or even predominance) of testing while developing a Perl module. I wrote lone scripts for years, then realized that even if my scripts seemed robust, I had no way to test them in a simple and reliable way.

So we will use the core module Test::More and the CPAN module Test::Exception in our module, so get the latter installed using your cpan or cpanm client. Take a look at Test::Simple if you are not used to testing (a minimal example is sketched below).
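Just to give the flavour, here is a minimal sketch of a test written with Test::Simple; the file name and the tested expressions are only illustrative, not part of the module we will build:

    # t/simple-example.t -- hypothetical, illustrative test file
    use strict;
    use warnings;
    use Test::Simple tests => 2;    # declare how many tests we plan to run

    ok( 1 + 1 == 2,       'basic arithmetic still works' );
    ok( 'perl' =~ /perl/, 'pattern matching works' );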

We also use the core module Carp to report errors from the user's point of view.

We use Module::Starter to have the skeleton of our module created for us, but, as always, there are valid alternatives. Install it.

We'll document our module using POD (Plain Old Documentation); see perlpod for reference.

the plan

Some of your programs or modules work on lists and arrays. Functions inside these programs accept ranges, but while you know what a valid range is ( 0,1,2 or 0..2 ), you discover that your programs crashed many times because other humans or other programs passed ranges like 0,1..3,2 (where 2 is present twice), or 3,2,1 (while your application silently expects 1,2,3), or 9..1, or even 0,1,good,13..15 which is not a range at all, or simply 1-3, which is a range for the user but not for your Perl code, which reads it as -2 (see the snippet below).
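A minimal sketch of why these inputs misbehave (the values are only illustrative):

    use strict;
    use warnings;

    my @dup     = (0, 1..3, 2);   # 0 1 2 3 2  -> the element 2 appears twice
    my @reverse = (9..1);         # empty list: the range operator does not count downwards
    my $oops    = eval '1-3';     # the string '1-3' is read as subtraction: -2

    print "dup:     @dup\n";
    print "reverse: @reverse (", scalar @reverse, " elements)\n";
    print "oops:    $oops\n";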

Tired of the situation, you plan a new module to validate ranges. Range::Validator is the name you choose. Your initial plan is to expose just one sub: validate.

As in masonry, you need a well prepared plan before starting excavations. Then you need points and lines drawn on the terrain: everything that makes the job complex is part of the job itself.

Look around: you can bet someone else had your same idea before you. You can also bet he or she was smarter than you and already uploaded it to CPAN.

Sharing early is a good principle: if you already have an idea of your module (even before implementing it), it can be worth asking in a forum dedicated to Perl (like perlmonks.org), posting an RFC (Request For Comments), or using the dedicated website prepan.org (not a crowded place nowadays.. ;).

Plan it well: it is difficult, but remember that repairing something badly planned is always a worse task. The basic reading is in the core documentation: perlnewmod is the place to start and perlmodstyle is what comes next. Don't miss the basic documentation.

If you want to read more, see the scaffold dedicated to modules in my bibliotheca.

Choose all your names carefully: the module's name and the names of the methods or functions your module exports: good code with badly named methods is often unusable by anyone but the author.

Programming is a matter of interfaces. Sic. Full stop. Coding is easy, engineering is hard. Sic. Another full stop. You can change the implementation a million times; you can never change how other people use your code. So plan well what you offer with your module. You can add new features in the future; you cannot remove even one of them, because someone is already using it in production. Play nice: plan well.

You can profit from reading a wonderful post: On Interfaces and APIs.

day one: prepare the ground

step 1) an online repository on github

Create an empty repository on the GitHub server named Range-Validator (they do not accept :: in names); see here for instructions.

step 2) a new module with module-starter

Open a shell in your scripts location and run the program module-starter that comes with Module::Starter. It wants a mail address, the author name and, obviously, the module name:

    shell> module-starter --module Range::Validator --author MyName --email MyName@cpan.org
    Added to MANIFEST: Changes
    Added to MANIFEST: ignore.txt
    Added to MANIFEST: lib/Range/Validator.pm
    Added to MANIFEST: Makefile.PL
    Added to MANIFEST: MANIFEST
    Added to MANIFEST: README
    Added to MANIFEST: t/00-load.t
    Added to MANIFEST: t/manifest.t
    Added to MANIFEST: t/pod-coverage.t
    Added to MANIFEST: t/pod.t
    Added to MANIFEST: xt/boilerplate.t
    Created starter directories and files
A lot of work done for us! The module-starter program created all the above files in a new folder named Range-Validator; let's see the content:

    ---Range-Validator
        |   Changes
        |   ignore.txt
        |   Makefile.PL
        |   MANIFEST
        |   README
        |
        |---lib
        |     ---Range
        |            Validator.pm
        |
        |----t
        |       00-load.t
        |       manifest.t
        |       pod-coverage.t
        |       pod.t
        |
        |----xt
                boilerplate.t
We now have a good starting point to work on. Spend some minutes reviewing the content of the files to get an idea.

step 3) a local repository with git

Open another shell for the git client (I prefer to have two; feel free to use just one) in the same path as the folder created above, and initialize a git repository (local for the moment) there:

    git-client> git init
    Initialized empty Git repository in /path/to/Range-Validator/.git/
Nothing impressive.. What happened? The above command created a .git directory, ~15 KB of info, to keep track of all the changes you'll make to your files inside the Range-Validator folder. In other words it created a git repository. Empty. Empty?!? And all my files?

It's time for a command you'll use many, many times: git status

    git-client> git status
    On branch master

    No commits yet

    Untracked files:
      (use "git add <file>..." to include in what will be committed)

            Changes
            MANIFEST
            Makefile.PL
            README
            ignore.txt
            lib/
            t/
            xt/

    nothing added to commit but untracked files present (use "git add" to track)
Many terms in the above output would be worth explaining, but not by me. Just be sure to understand what branch, commit and tracked/untracked mean in the git world. Luckily the command is so sweet as to add a hint for us in the last line: (use "git add" to track).

Git is built for this reason: it tracks all the modifications we make to the code base, and it takes a picture (a snapshot in git terminology) of the whole code base every time we commit these changes. But git init initialized an empty repository: we must tell git which files to add to the tracked ones.

We simply want to track all the files module-starter created for us: git add . adds the current directory and all its content to the tracked content. Run it and check the status again:

    git-client> git add .
    git-client> git status
    On branch master

    No commits yet

    Changes to be committed:
      (use "git rm --cached <file>..." to unstage)

            new file:   Changes
            new file:   MANIFEST
            new file:   Makefile.PL
            new file:   README
            new file:   ignore.txt
            new file:   lib/Range/Validator.pm
            new file:   t/00-load.t
            new file:   t/manifest.t
            new file:   t/pod-coverage.t
            new file:   t/pod.t
            new file:   xt/boilerplate.t
We added all the content, but we still have not committed anything! git commit -m "some text" will commit all changes using the message provided as a label for the commit (without -m git will open a text editor to enter the text). Run it and check the status again:

    git-client> git commit -m "module-starter created content"
    [master (root-commit) 1788c12] module-starter created content
     11 files changed, 409 insertions(+)
     create mode 100644 Changes
     create mode 100644 MANIFEST
     create mode 100644 Makefile.PL
     create mode 100644 README
     create mode 100644 ignore.txt
     create mode 100644 lib/Range/Validator.pm
     create mode 100644 t/00-load.t
     create mode 100644 t/manifest.t
     create mode 100644 t/pod-coverage.t
     create mode 100644 t/pod.t
     create mode 100644 xt/boilerplate.t

    git-client> git status
    On branch master
    nothing to commit, working tree clean
With the above we committed everything. The status is now working tree clean: what better news for a lumberjack used to examining tons of dirty logs daily? ;)

Now we link the local copy and the remote one on GitHub. All the examples you find, and even what GitHub proposes to you, say git remote add origin https://github.com/... where origin is not a keyword but just a label, a name: I found this misleading, so in its place I use my GitHub name, or something that tells me the meaning, like MyFriendRepo. So from now on we will use YourGithubLogin there.

Add the remote and verify it ( with -v ):

    git-client> git remote add YourGithubLogin https://github.com/YourGithubLogin/Range-Validator.git
    git-client> git remote -v
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator.git (fetch)
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator.git (push)
The verify operation gives us two hints: for the remote repository that we called YourGithubLogin we can fetch (import all the changes you do not have yet from the remote repository into your local copy) or push (export your local copy to the remote repository).

Since on GitHub there is nothing, and locally we have the whole code base, we definitely want to push, and we can do that if, and only if, we have permission on the remote repository. It's our own repository, so no problem (git will ask for the GitHub password). The push wants to know which branch to push: we only have master, so:

    git-client> git push YourGithubLogin master
    fatal: HttpRequestException encountered.
    Username for 'https://github.com': YourGithubLogin
    Password for 'https://YourGithubLogin@github.com': ***********
    Counting objects: 17, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (14/14), done.
    Writing objects: 100% (17/17), 5.33 KiB | 303.00 KiB/s, done.
    Total 17 (delta 1), reused 0 (delta 0)
    remote: Resolving deltas: 100% (1/1), done.
    remote:
    remote: Create a pull request for 'master' on GitHub by visiting:
    remote:      https://github.com/YourGithubLogin/Range-Validator/pull/new/master
    remote:
    To https://github.com/YourGithubLogin/Range-Validator.git
     * [new branch]      master -> master
Go to the GitHub website to see what happened: the whole code base is in the online repository too, updated to our last commit (a.k.a. our first and only commit for the moment). From now on we can work on our code from any machine that has a git client. To do so we must be diligent, committing and pushing our changes at the right moment to keep the online repository up to date. Clean yard, happy master mason.

A whole day has passed, well.. two days, and we did not write a single line of Perl code: we are starting the right way! Time to go to sleep with a well-prepared playground.

day two: some change and tests

step 1) POD documentation

Well, first of all some cleaning: open your local copy of the module /path/to/Range-Validator/lib/Range/Validator.pm in your text editor or IDE. Personally I like the POD documentation to be all together after the __DATA__ token rather than interleaved with the code. Inside the code I only like to have comments. POD documentation is for the user, comments are for you! After a week or a month you'll never remember what your code is doing: comment it, explaining what is going on.

So go to the end of the module, where the final 1; line is (remember all modules must have a true value as their last statement), and place the __DATA__ token on a new line. Move all the POD after the token. Also delete the POD and the code related to function2.

Then rename function1 to validate and change the name of the POD section accordingly.

Modify the =head1 NAME POD part with a more humble and meaningful description: Range::Validator - a simple module to verify array and list ranges

Change the =head1 SYNOPSIS part too, removing unneeded text and changing code lines (see below): we are not writing an object-oriented module, so no new method for us. You plan to accept both real ranges and strings representing ranges.

So, if you followed me, the module should look like this:

    package Range::Validator;

    use 5.006;
    use strict;
    use warnings;

    our $VERSION = '0.01';

    sub validate {

    }

    1;

    __DATA__

    =head1 NAME

    Range::Validator - a simple module to verify array and list ranges

    =head1 VERSION

    Version 0.01

    =cut

    =head1 SYNOPSIS

        use Range::Validator;

        my @range = Range::Validator->validate(0..3);       # a valid range
        my @range = Range::Validator->validate(0..3,2);     # an overlapping range
        my @range = Range::Validator->validate('1,3,7');    # a valid range passed as a string
        my @range = Range::Validator->validate('1,XXX,3');  # an invalid range passed as a string

    # more POD ...
Ok? Let's check that our new POD is correct: open the shell in the directory created yesterday, /path/to/Range-Validator, and run the following command: perldoc ./lib/Range/Validator.pm

Review the POD. It should be OK.

step 2) first test

Now we test whether the module's syntax is correct. The first simple method is a short one-liner using the perl option -I to include ./lib in @INC and -MRange::Validator to use our module (see perlrun and perlvar):

    shell> perl -I ./lib -MRange::Validator -e 1
    shell>
No errors: good! The module can be used and has no syntax errors. But.. one moment: do we want to try out all our features, and we plan to add many, using one-liners? Are we mad?! No; we will use tests.

Tests are wonderful in Perl, and planning good tests (a test suite) will save a lot of time in the future and makes your code maintainable. The time you invest writing tests while coding will pay off later when you modify the code base. I'm not a theorist of software writing nor an orthodox follower of test-driven development, but writing tests while you code is a very good practice. You can even write tests before coding, i.e. you write something that tests a wanted behaviour, you run it expecting a failure, then you write the code that makes the test happy (a tiny sketch of this cycle follows below). This is up to you.
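As a minimal, purely illustrative sketch of that test-first cycle (the file name and the behaviour tested here are assumptions, not part of the test suite we build later):

    # t/wishful.t -- hypothetical: written BEFORE the code exists
    use strict;
    use warnings;
    use Test::More tests => 1;

    use Range::Validator;

    # describe the behaviour we wish for: a clean string range expands to a list
    is_deeply( [ Range::Validator::validate('0..2') ], [ 0, 1, 2 ],
               'a valid string range expands to the expected list' );

    # first run: it fails, because validate() is still empty;
    # then we write just enough code to make it pass.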

What is not an option is having no test suite, or writing all the tests at the end of code development. No.

On day one we used module-starter to produce a skeleton of our module. module-starter was so kind as to write a bunch of tests for us in the standard directory /t (i.e. tests). Tests are normally run (sorted by their names) during the installation of the module but, as we already said, they are the main source of serenity for us as developers. So let's see what module-starter wrote inside /t/00-load.t:

    #!perl -T
    use 5.006;
    use strict;
    use warnings;
    use Test::More;

    plan tests => 1;

    BEGIN {
        use_ok( 'Range::Validator' ) || print "Bail out!\n";
    }

    diag( "Testing Range::Validator $Range::Validator::VERSION, Perl $], $^X" );
This Perl program uses strict and warnings (you already know they are friends, don't you?), then loads the core module Test::More, which generally requires that you declare how many tests you intend to run ( plan tests => 1 ); then inside the BEGIN block it uses the use_ok function, which loads our own module and, in case of failure, prints "Bail out!\n", a.k.a. "everything went wrong, leave the boat".

If the above succeeded, the test calls Test::More's diag, which emits a diagnostic message with the text specified, useful to have while reviewing test output. The module also has the note function, which I prefer. Go to the module documentation to get an idea of Test::More.

So, instead of the one liner we can safely call this test:

    shell> perl -I ./lib ./t/00-load.t
    "-T" is on the #! line, it must also be used on the command line at ./t/00-load.t line 1.
The test crashes because of the -T switch, which turns taint mode on. Taint mode is a cornerstone of security in Perl, but for the moment we do not need it enabled, so we remove it from the shebang line, which will then read #!perl (read about taint mode in the official Perl documentation: perlsec).

(Note that removing the -T switch is not the best thing to do: perl -T -I ./lib ./t/00-load.t is by far a better solution.)

After this change the test will run as expected:

    shell> perl -I ./lib ./t/00-load.t
    ok 1 - use Range::Validator;
    1..1
    # Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl
Wow! We ran our first test! ..Yes, but in the wrong way. Well, not exactly the wrong way, but not the way tests are run during installation. Tests are run through a TAP harness (TAP stands for Test Anything Protocol and has been present in perl since forever: perl was born the right way ;).

With your perl distribution you have the prove command (see its documentation), which runs tests through a TAP harness. So we can use it.

We can call prove the very same way we called perl: prove -I ./lib ./t/00-load.t, but we are lazy and we spot prove -l, which has the same effect as prove -I ./lib, i.e. it includes ./lib in @INC.

Run the very same test through prove instead of perl and you will see a slightly different output:

    shell> prove -l ./t/00-load.t
    ./t/00-load.t .. 1/? # Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl
    ./t/00-load.t .. ok
    All tests successful.
    Files=1, Tests=1,  0 wallclock secs ( 0.01 usr +  0.02 sys =  0.03 CPU)
    Result: PASS
Basically the output includes some statistics, the count of test files processed and the overall number of tests. Also note that the message emitted by diag is in a different place: diagnostics from Test::More go to STDERR (which is buffered differently with respect to STDOUT, but this is another story..), while the TAP harness aggregates test results and prints them to STDOUT (see the small sketch of diag versus note below).
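A minimal sketch, just to illustrate the difference (the messages are only examples): diag writes to STDERR, so its text is shown even by a non-verbose harness, while note writes to STDOUT and is normally visible only with prove -v.

    use strict;
    use warnings;
    use Test::More tests => 1;

    ok( 1, 'a trivially passing test' );

    diag('this goes to STDERR: shown even without prove -v');
    note('this goes to STDOUT: normally visible only with prove -v');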

Finally we have the developer's gratification: Result: PASS, indicating all went well.

The prove program promotes laziness: without an argument (such as the test file in the previous example) it automatically runs every test file found under the /t folder. This is the same behaviour you will have during an actual module installation:

    shell> prove -l
    t\00-load.t ....... 1/? # Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl
    t\00-load.t ....... ok
    t\manifest.t ...... skipped: Author tests not required for installation
    t\pod-coverage.t .. skipped: Author tests not required for installation
    t\pod.t ........... skipped: Author tests not required for installation
    All tests successful.
    Files=4, Tests=1,  1 wallclock secs ( 0.06 usr +  0.02 sys =  0.08 CPU)
    Result: PASS

step 3) commit changes with git

Ok, we have made some changes to the code base, small ones, but changes. Which changes? I'm lazy and I do not remember all the files we modified. No problem, git will tell us. At least I remember which command I need to review the code base status: git status.

Go to the git shell and run it:

    git-client> git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

            modified:   lib/Range/Validator.pm
            modified:   t/00-load.t

    no changes added to commit (use "git add" and/or "git commit -a")
Ah yes, we modified two files: not only the module but also t/00-load.t, removing the -T from the shebang line. Thanks git, and you are also so kind as to give me two hints about what to do next: use "git add" and/or "git commit -a".

Go for the shorter path: we commit all files with git commit -a, i.e. we commit all files that are already tracked and, if needed, remove from the tracked list any files deleted from the code base. But we remember that committing needs to include a message as the label of the commit: git commit -m "message". So, putting it all together and checking the status:

    git-client> git commit -a -m "moved POD, removed -T"
    [master 49a0690] moved POD, removed -T
     2 files changed, 20 insertions(+), 23 deletions(-)

    git-client> git status
    On branch master
    nothing to commit, working tree clean

step 4) pushing to github repository

Ok, we submitted, well, committed, all the changes made. What's next? We have to synchronize the online repository, the one we named YourGithubLogin, so check it and push the modified content to it:

    git-client> git remote -v
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (fetch)
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (push)

    git-client> git push YourGithubLogin master
    fatal: HttpRequestException encountered.
    Username for 'https://github.com': YourGithubLogin
    Password for 'https://YourGithubLogin@github.com':
    Counting objects: 7, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (5/5), done.
    Writing objects: 100% (7/7), 870 bytes | 435.00 KiB/s, done.
    Total 7 (delta 3), reused 0 (delta 0)
    remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
    To https://github.com/YourGithubLogin/Range-Validator
       1788c12..49a0690  master -> master
Open the online repository in the browser to see what happened after the git push: on the main page, where the files are listed, we spot our two modified files with a new timestamp and the message we used when committing. Under the Insights tab, and then under Network in the side menu, we can see two points connected by a line segment: this is the visual history of our repository and of each commit we have made. Here you will also find any branches, but this is another story.

Well, another day has passed without writing a single line of Perl code! At least for the moment our code is 100% bug-free ;) I vaguely recall a Chinese motto: "when you start something, start from the opposite", or something like that. To write a robust Perl module, start by writing no Perl code, for two days!

day three: finally some code

step 1) first lines of code

It's time to put some code inside our validate subroutine. We plan to accept both a string like '1..3,5' and a real range like 1..5,6, but let's start with the string form, assuming only one element will be passed to our sub via @_.

Remember what was said in the foreword: this tutorial is not about coding, so be merciful with the following examples.

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ) {
            $range = $_[0];
        }
        # otherwise we received a list
        else {
            ...
        }
        return @range;
    }
The above is straightforward (if ugly): we get something in via @_ (a string or a list) and we return something via return @range. To accomplish this we declare $range to hold our string.

A good principle in loops is "put exit conditions early", and following this principle we put our die conditions as soon as possible, i.e. right after the if/else check.

But we don't want to die with an ugly message like Died at ../Range/Validator.pm line x, i.e. from the module's perspective: we want to inform the user where their code caused our module to die.

The core module Carp provides this kind of behaviour, and we use its croak function, which dies from the perspective of the caller (see the tiny comparison below).
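A tiny, purely illustrative comparison of die versus croak (the module and script names here are assumptions, not part of Range::Validator):

    # inside a hypothetical My/Module.pm
    package My::Module;
    use strict;
    use warnings;
    use Carp;

    sub with_die   { die   "bad argument" }   # message points at My/Module.pm
    sub with_croak { croak "bad argument" }   # message points at the caller's file and line

    1;

    # in the user's hypothetical script.pl:
    #   My::Module::with_die();    # "bad argument at .../My/Module.pm line ..."
    #   My::Module::with_croak();  # "bad argument at script.pl line ..."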

So we add the line needed to load the module, a first croak call in case the string passed in contains forbidden characters, and some other lines too:

    package Range::Validator;

    use 5.006;
    use strict;
    use warnings;
    use Carp;                                   # --- new line

    our $VERSION = '0.01';

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ) {
            $range = $_[0];
        }
        # otherwise we received a list
        else {
            ...
        }
        # remove any space from the string
        $range =~ s/\s+//g;                     # --- new line
        # die if invalid characters
        croak "invalid character passed in string [$range]!"
              if $range =~ /[^\s,.\d]/;         # --- new line
        @range = eval ($range);                 # --- new line
        return @range;
    }

    1;

step 2) testing on our own

How do we see if everything works as expected? Obviously with a test. Not 00-load.t but a new one dedicated to the validate sub. So go into the t folder, create a new file 01-validate.t and open it to edit the content.

Let's populate it with some basic content plus some new stuff (01-validate.t):

    #!perl
    use 5.006;
    use strict;
    use warnings;
    use Test::More qw(no_plan);
    use Test::Exception;

    use_ok( 'Range::Validator' );

    ok (scalar Range::Validator::validate('0..2') == 3,
        'ok valid string produces correct number of elements' );

    note ("starting test of forbidden characters in the string form");

    dies_ok { Range::Validator::validate('xxxinvalidstringxxx') }
            "expected to die with invalid character";
First of all we used a different notation for Test::More, i.e. use Test::More qw(no_plan).

We are telling the module that we do not (yet) have a plan for how many tests will be in this file. This is a handy feature (a tiny sketch of an alternative follows below).
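As an aside, and assuming a reasonably recent Test::More, a common alternative to qw(no_plan) is calling done_testing() at the end of the file, which tells the harness that whatever number of tests actually ran was the intended plan. A minimal sketch:

    use strict;
    use warnings;
    use Test::More;          # no plan declared up front

    ok( 1, 'first test'  );
    ok( 2, 'second test' );

    done_testing();          # the plan is whatever we actually ran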

The Test::More core module offers us the ok, use_ok and note functions: see the module docs for more info about them.

But in the above test we also used the dies_ok function: this one comes from the CPAN module Test::Exception, and we need to add that module to our dependencies list.

Dependencies list? What is that? Where did we talk about this? Never, until now.

step 3) add dependencies in Makefile.PL

In fact the module-starter program used on day one created a file called Makefile.PL with the following default content:

    use 5.006;
    use strict;
    use warnings;
    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME             => 'Range::Validator',
        AUTHOR           => q{MyName <MyName@cpan.org>},
        VERSION_FROM     => 'lib/Range/Validator.pm',
        ABSTRACT_FROM    => 'lib/Range/Validator.pm',
        LICENSE          => 'artistic_2',
        PL_FILES         => {},
        MIN_PERL_VERSION => '5.006',
        CONFIGURE_REQUIRES => {
            'ExtUtils::MakeMaker' => '0',
        },
        BUILD_REQUIRES => {
            'Test::More' => '0',
        },
        PREREQ_PM => {
            #'ABC'              => '1.6',
            #'Foo::Bar::Module' => '5.0401',
        },
        dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
        clean => { FILES => 'Range-Validator-*' },
    );
This file is run on the target system that is trying to install your module. It's a vast subject, and you can find much useful information in the core documentation of ExtUtils::MakeMaker and in ExtUtils::MakeMaker::Tutorial and, as always in Perl, there are many ways to do it.

In our simple case we only need to know a few facts about the BUILD_REQUIRES and PREREQ_PM fields.

The first one lists, in a hash, all the modules and their versions needed to build our module; building includes testing, so if you need some module during tests this is the place to insert the dependency. The module-starter program added the 'Test::More' => '0' entry for us. This is the right place to state that we intend to use the Test::Exception CPAN module during tests.

On the other hand, PREREQ_PM lists the modules and their minimal versions needed to run your module. As you can see it's a different thing: to run Range::Validator you never need Test::Exception but, for example, you'll need Carp.

Even if Carp is a core module, it is good practice to include it in PREREQ_PM.

Read a very good post about dependencies Re: How to specify tests dependencies with Makefile.PL?

Cleaning up the example lines and given all the above, we will modify Makefile.PL as follows:

    use 5.006;
    use strict;
    use warnings;
    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME             => 'Range::Validator',
        AUTHOR           => q{MyName <MyName@cpan.org>},
        VERSION_FROM     => 'lib/Range/Validator.pm',
        ABSTRACT_FROM    => 'lib/Range/Validator.pm',
        LICENSE          => 'artistic_2',
        PL_FILES         => {},
        MIN_PERL_VERSION => '5.006',
        CONFIGURE_REQUIRES => {
            'ExtUtils::MakeMaker' => '0',
        },
        BUILD_REQUIRES => {
            'Test::More'      => '0',
            'Test::Exception' => '0',   # --- new line
        },
        PREREQ_PM => {
            'Carp' => '0',              # --- new line
        },
        dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
        clean => { FILES => 'Range-Validator-*' },
    );
So the moral is: when you add a dependency needed to run your module or to test it, remember to update the corresponding part of Makefile.PL.

step 4) run the new test

Ok, is the above test fine? Does it return everything we expect? Try it using prove -l, but also specifying -v to be verbose and the filename of our new test (right now we don't want all the tests run, just the one we are working on):

    shell> prove -l -v ./t/01-validate.t
    ./t/01-validate.t ..
    ok 1 - use Range::Validator;
    ok 2 - ok valid string produces correct number of elements
    # starting test of forbidden characters in the string form
    ok 3 - expected to die with invalid character
    1..3
    ok
    All tests successful.
    Files=1, Tests=3,  0 wallclock secs ( 0.05 usr +  0.01 sys =  0.06 CPU)
    Result: PASS

step 5) commit, add new files and push with git

What more do we need from our first day of coding? To check our status and to synchronize our online repository (pay attention to the following commands, because we have a new, untracked file!):

    git-client> git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

            modified:   Makefile.PL
            modified:   lib/Range/Validator.pm

    Untracked files:
      (use "git add <file>..." to include in what will be committed)

            t/01-validate.t

    no changes added to commit (use "git add" and/or "git commit -a")

    git-client> git commit -a -m "some code into validate and modified Makefile.PL"
    [master 580f628] some code into validate and modified Makefile.PL
     2 files changed, 23 insertions(+), 3 deletions(-)
We committed before adding the new file! Shame on us! Add the new file and issue another commit:

    git-client> git add t/01-validate.t
    git-client> git status
    On branch master
    Changes to be committed:
      (use "git reset HEAD <file>..." to unstage)

            new file:   t/01-validate.t

    git-client> git commit -a -m "added 01-validate.t"
    [master 5083ec3] added 01-validate.t
     1 file changed, 16 insertions(+)
     create mode 100644 t/01-validate.t

    git-client> git status
    On branch master
    nothing to commit, working tree clean
What more? Ah! pushing to the online repository:
    git-client> git remote -v
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (fetch)
    YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (push)

    git-client> git push YourGithubLogin master
    fatal: HttpRequestException encountered.
    Username for 'https://github.com': YourGithubLogin
    Password for 'https://YourGithubLogin@github.com': *********
    Counting objects: 10, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (8/8), done.
    Writing objects: 100% (10/10), 1.31 KiB | 447.00 KiB/s, done.
    Total 10 (delta 5), reused 0 (delta 0)
    remote: Resolving deltas: 100% (5/5), completed with 4 local objects.
    To https://github.com/YourGithubLogin/Range-Validator
       49a0690..5083ec3  master -> master
What a day! We added six lines of code and an entire test file! Are we programming too much? Probably not, but we are doing it in a robust way and we discovered it can be hard work. In Perl hard work is justified only by (future) laziness, and we are doing all this work because we are lazy and do not want to waste our time when, in a month or a year, we need to pick up this code base again to enhance it or debug it. So now it's time for bed and for well-deserved colourful dreams.

continue..