
Re^2: Writing a Test Harness

by elTriberium (Friar)
on Mar 31, 2011 at 00:06 UTC ( #896521 )

in reply to Re: Writing a Test Harness
in thread Writing a Test Harness

Any explanation why you think it looks "like a cluster-f*** in the making"? I'm fine with other opinions, but some explanation would be nice. I understand that keeping it simple is a good idea, but I don't see why the modules I mentioned above (which are all CPAN modules used in many projects) are particularly complicated.

Just to explain some thoughts:

  • I want to use some of the more advanced testing options that are in Test::Most. I don't see how that's worse than using Test::More.
  • There need to be some configuration files, which is why I'm thinking about using YAML: a good format that is easy to read and write and translates well into Perl scalars, arrays and hashes.
  • I want to use Moose to actually make the design a bit simpler, by breaking the code down into several classes.
  • Expect (or something similar) will be needed for this project, as it requires running many commands on different clients and servers.
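To make the YAML point concrete, here is a rough sketch of the kind of config I have in mind (the hostnames and keys are made up for illustration; YAML::XS is one CPAN parser, and the plain YAML module works the same way):

```perl
use strict;
use warnings;
use YAML::XS qw(Load);

# Hypothetical harness config -- in practice this would live in a
# file and be read with YAML::XS::LoadFile.
my $config = Load(<<'END_YAML');
server: filer01.example.com
timeout: 30
clients:
  - host: client01.example.com
    protocol: nfs
  - host: client02.example.com
    protocol: cifs
END_YAML

# The YAML mapping comes back as a plain Perl hashref.
print "server:  $config->{server}\n";
print "clients: ", scalar @{ $config->{clients} }, "\n";
```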

Replies are listed 'Best First'.
Re^3: Writing a Test Harness
by chrestomanci (Priest) on Mar 31, 2011 at 14:43 UTC

    I don't know what educated_foo was thinking when he wrote that your plan looks like a cluster-f*** in the making, but I agree, and these are my reasons.

    I think you are conflating several sorts of testing and monitoring, and trying to coerce the various Test::* modules on CPAN into doing things they are not designed for, such as testing your deployment or monitoring.

    Firstly, the testing modules such as Test::Most are mostly there to test the correctness of your low-level code (known as API testing). For example, suppose the source code for your project has an add_numbers() subroutine; then somewhere in the corresponding test suite you ought to have:

        use Test::Most tests => 2;

        is( add_numbers(2, 2), 4, 'Correctly added two numbers' );
        throws_ok( sub { add_numbers(2, 'string') }, qr/not a number/,
                   'Error with non number' );

    If you write lots of API tests in your test suite, and you use a coverage tool such as Devel::Cover to make sure that all your source code is checked by tests, then you can be reasonably confident that the source code is correct and has no bugs, or at least that any remaining bugs will be at a more conceptual level and are reflected in the design of the tests. (For example if the specification is unclear, then it is likely that you will get both code and tests that agree on the wrong thing.)

    In general it is a mistake to set up complex situations, such as running commands through ssh sessions on other machines, unless that is totally unavoidable. Remember you are testing the correctness of the code, so you want to test only one thing at a time, and you want your tests to be simple so that when they fail it is easy to work out which API is broken.

    Take the example of the add_numbers() subroutine. You might expose it through a SOAP API, so you could use SOAP to test it, but if that test failed, you would not know whether it was SOAP that was broken in some way or the API itself, so it is much better to create two simpler tests that test the two separately.
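    A rough sketch of that split (add_numbers() and the SOAP endpoint are both hypothetical; the transport half is shown commented out because it needs a live endpoint):

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical routine under test -- stands in for the real API.
sub add_numbers {
    my ($x, $y) = @_;
    return $x + $y;
}

# Test 1: the API itself, with no transport in the way.
is( add_numbers(2, 2), 4, 'add_numbers works when called directly' );

# Test 2 (a separate test file in practice): only the SOAP plumbing.
# SOAP::Lite is one way to make the call:
#
#   my $soap = SOAP::Lite->uri('urn:Calc')->proxy('http://localhost/calc');
#   is( $soap->add_numbers(2, 2)->result, 4, 'add_numbers works over SOAP' );

done_testing();
```

    If the direct test passes and the SOAP test fails, you know immediately that the transport layer is the problem, not the arithmetic.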

    Secondly, there is system-level testing and benchmarking. For application programs (think Microsoft Excel), this is usually done by humans running the complete program and checking that it all works, nothing breaks or looks strange, and the performance is OK. For some types of programs (e.g. web applications) you can automate some of the testing with tools like Catalyst::Test, but it is important to remember that automated testing will only test the things you can think of. Sometimes it takes human randomness to uncover things that the programmer did not think of, or user-interface disasters that make perfect sense to a programmer but are impossibly confusing to an end user.

    Your comment about running tests through SSH sessions hints that you are looking to test the deployed application. If that is the case, then you should probably be looking into monitoring tools such as Zabbix or Nagios.

      OK, there might have been a misunderstanding here. And that was my fault, because I didn't specify my requirements and background in detail. I will give some additional details, but I will have to limit it to a high-level overview:

      The plan is to use a Perl-based test harness to support the QA and DEV groups of a large company with automated tests. There's something like this in place already, but it has limitations, which is why we're redesigning parts of it. In our environment we have a server running our product and multiple clients accessing this server. This is why we're already using Expect and will most likely also do so in the future. The server itself is not written in Perl, so we're not trying to test Perl code, but the different features of the server instead. This will include functional, error recovery, stress and other tests.
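      The sort of Expect-driven interaction I mean looks roughly like this (just a sketch: it spawns a local shell to stay self-contained, where the real harness would spawn an ssh session to a client):

```perl
use strict;
use warnings;
use Expect;

# Spawn the session; in the harness this would be e.g. 'ssh client01'.
my $exp = Expect->spawn('/bin/sh')
    or die "Cannot spawn shell: $!";
$exp->log_stdout(0);    # keep the session output off our stdout

# Run a command and wait (up to 5 seconds) for a known marker.
$exp->send("echo READY\n");
my $ok = $exp->expect(5, 'READY');
print $ok ? "client responded\n" : "timed out\n";

$exp->send("exit\n");
$exp->soft_close();
```

      The real tests then wrap patterns like this to drive product commands on the server and the clients and assert on what comes back.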

      I don't want to go into all the details, because this will become too much to post here. Which doesn't mean that we're not looking into those details ourselves. Here I just wanted to see if others know about CPAN modules I should start looking at, which I might have missed so far. There were a few good tips already, but more are certainly appreciated.

Re^3: Writing a Test Harness
by educated_foo (Vicar) on Mar 31, 2011 at 02:31 UTC
    It seems to me that you're best off with the simplest testing setup possible. However, you seem to be reaching for a bunch of "cool" modules. It makes me wonder what kind of sophisticated testing framework you will use to test your sophisticated testing framework.
