Automate and dispatch unit test runs across perl/berrybrew on remote Windows and Unix systems
by stevieb (Canon)
on Apr 27, 2016 at 17:17 UTC
My Test::BrewBuild test deployment system is now reasonably stable, and it can now dispatch test runs to remote test servers. This means that with a single command, you can run your unit tests across any number of perl instances on any operating system automatically, and have the results delivered back to you.
To do this, we use the bbdispatch command to send brewbuild commands to previously configured remote testers. To start a bbtester, log on to the remote system and run bbtester start. That starts the tester and puts it in the background.
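Setting up a remote box is just that one command, run once per tester:

```shell
# on each remote test server: start the listening tester
# and put it in the background
bbtester start
```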
Now's probably a good time to state that Git is required to be installed on all systems used for network testing, and one should peruse the basic system setup doc.
In these examples, I have three testers set up. tester1 is the DNS name of an Ubuntu Linux system running perlbrew, tester2 is a Windows 2008 Server running berrybrew, and localhost is a FreeBSD server, again running perlbrew.
There are three flags for bbdispatch: -t to specify one or more testers, -r to specify the repository to test, and -c to set the brewbuild command string to dispatch.
Note that you can alternatively use a config file to store the dispatcher information.
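If you go the config file route, it might look something like the sketch below. The section and key names here are my assumptions (an INI-style layout); check the Test::BrewBuild documentation for the actual file location and format.

```ini
; hypothetical config fragment -- section/key names are assumptions,
; see the Test::BrewBuild docs for the real layout
[dispatch]
testers = tester1, tester2, localhost
repo    = https://github.com/you/some-module
```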
Here's the most basic example. We're already in a repository working directory so we can omit -r, we're only working on a tester on localhost, and we'll just use the default brewbuild test run:
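With the repo and command defaulted, the run boils down to a single short invocation. Whether localhost must still be named explicitly with -t is my assumption here:

```shell
# from inside the repository's working directory, so -r is omitted;
# dispatch the default brewbuild run to the local tester
bbdispatch -t localhost
```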
Things get more useful when you have multiple testers across multiple operating systems.
This example does a basic run using the same repo as above, but this time I'm explicitly setting it. I'm also dispatching to three tester systems:
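That multi-tester dispatch might look like the following; the repository URL is a placeholder, and passing -t once per tester is my assumption:

```shell
# hypothetical repo URL; tester names are the three systems above
bbdispatch -r https://github.com/you/some-module \
           -t tester1 -t tester2 -t localhost
```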
Below, we use the -c flag to hand bbdispatch the brewbuild command we want run (I'm back in the repo dir again, so I omit -r). Here, brewbuild tests the current module, then runs the unit tests of all of the module's reverse dependencies (-R) to ensure our proposed update doesn't break downstream modules (i.e. modules that require/use your module).
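A reverse-dependency dispatch would then look something like this; I'm reading -c as bbdispatch's command flag, with the quoting of the command string being my assumption:

```shell
# run the current module's tests, then the test suites of all of its
# reverse dependencies, on every configured tester
bbdispatch -c "brewbuild -R"
```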
Notice that some results are FAIL. In this case, we create a bblog directory inside the working directory, and generate a log file for each individual failure. This allows you to see what broke and where, without having to log in to each individual system. You can then update/fix your code and run another dispatch. Here's an example of the files generated by the above run:
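The listing below is illustrative only: the module name and perl versions are hypothetical, and the inclusion of the revdep module's name in the filename is my assumption, extrapolated from the tester_version-STATUS.bblog pattern described further down.

```shell
$ ls bblog
tester1_Some-Module_5.10.1_32-FAIL.bblog
tester2_Some-Module_5.22.1_64-FAIL.bblog
```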
The log files contain all errors that the tester would have produced to STDOUT and STDERR, with the cpanm build logs appended.
On a normal dispatch run (without revdep), the log file would have been named tester1_5.10.1_32-FAIL.bblog. Running brewbuild in standalone mode (no dispatching): 5.10.1_32-FAIL.bblog. You can optionally save the PASS logs as well, using brewbuild's -S, --save flag.