in reply to Perl CPAN test metadata
Erm, the AUTOMATED_TESTING environment variable being set is me, the CPAN Tester, letting you, the CPAN author, know that the environment the tests are running in is an automated smoke tester (i.e. no human present). It has no other utility, implied or otherwise.
Re^2: Perl CPAN test metadata
by eyepopslikeamosquito (Archbishop) on Aug 23, 2010 at 04:18 UTC
Yes, I know the purpose of AUTOMATED_TESTING, so my post was not as clear as it should have been. Let me clarify the point I'm trying to make with a specific example.

Suppose, as a CPAN author, I've written a long-running stress test, say t/stress.t, for my distribution. With AUTOMATED_TESTING, the test writer is expected to write some (imperative) code in t/stress.t that checks the AUTOMATED_TESTING (and possibly other) environment variables and skips the test if the variable is not set (sketched below).

Instead of asking the test to check its runtime environment, I'm proposing that the test tools check the test's metadata. In this example, t/stress.t (declaratively) states, via test metadata, that it is a long-running stress test. Armed with this metadata, the CPAN tool chain can hopefully "do the right thing": "make test" run by a human would skip the test, while automated smoke testers would cheerfully run it. While the details are yet to be fleshed out, I find this approach attractive for three reasons:

Update: If you squint, you'll see that the xt/ sub-directory (for "extra" tests) is just a special case of my more general proposal; that is, placing a test in the xt/ sub-directory states (declaratively) that the test has the metadata of being an "extra test" (where I guess "extra test" here means "do not run via the 'make test' action").
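For concreteness, here is a minimal sketch of the imperative style described above, assuming a hypothetical t/stress.t; the skip_all plan from Test::More is the usual way to bail out of an entire test file:

```perl
# t/stress.t -- illustrative sketch only; the file name and the test body are
# hypothetical, but the AUTOMATED_TESTING check is the imperative style
# discussed above.
use strict;
use warnings;
use Test::More;

# Skip the whole file unless a smoke tester (no human present) is running us.
plan skip_all => 'long-running stress test; set AUTOMATED_TESTING to run'
    unless $ENV{AUTOMATED_TESTING};

# ... the actual long-running stress tests would go here ...
ok( 1, 'placeholder stress test' );

done_testing();
```

Under the proposal, that boilerplate would move out of the test file and into declarative metadata that the tool chain reads for itself.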
by Anonymous Monk on Aug 23, 2010 at 06:09 UTC
by eyepopslikeamosquito (Archbishop) on Aug 23, 2010 at 07:12 UTC
I'm contemplating the general problem of how best to classify and organize tests. As a specific example, you might classify and organize your tests like this:

An obvious limitation of such a scheme is that a test cannot simultaneously be in two different categories. For example, where should I place an rtbug test that is also a stress test? And a "stress" test may or may not be "long running": what if I want to run all stress tests, but not the "long running" ones? Arguably, the short-running stress tests should be in t/ and not xt/, so that they are run by "make test".

Property-like metadata solves this problem, akin to adding "tags" to your photo collection. Tools like prove could be enhanced to allow you to specify boolean combinations of test metadata that you want to run (see the sketch below).

Maybe what I'm proposing is YAGNI. It seems I'll need to come up with some specific, compelling use cases before I could hope to gain any support for this idea.

Update: Note that the xt/ directory layout recommended by Module::Install::ExtraTests is:
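To make the "tags" idea above a little more concrete (and quite apart from the ExtraTests layout), here is a purely hypothetical sketch: a test declares its properties in a comment, and a tiny driver selects matching files and hands them to prove. The # TEST-TAGS marker and the driver script are invented for illustration; no such convention exists today.

```perl
#!/usr/bin/env perl
# run-tagged-tests -- hypothetical driver illustrating tag-based test selection.
use strict;
use warnings;

my %want = map { $_ => 1 } @ARGV;           # e.g. run-tagged-tests stress
die "usage: $0 TAG [TAG ...]\n" unless %want;

my @selected;
for my $file (glob 't/*.t') {
    open my $fh, '<', $file or die "$file: $!";
    while ( my $line = <$fh> ) {
        # A test might declare, say: "# TEST-TAGS: stress, long-running"
        next unless $line =~ /^#\s*TEST-TAGS:\s*(.+)/;
        my @tags = split /\s*,\s*/, $1;
        push @selected, $file if grep { $want{$_} } @tags;
        last;
    }
}

# prove accepts explicit file arguments, so just hand the matches to it.
system( 'prove', '-l', @selected ) if @selected;
```

A real implementation would presumably support the boolean combinations of tags suggested above; this sketch only selects files matching any one of the requested tags.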
by MidLifeXis (Monsignor) on May 17, 2011 at 13:52 UTC
I have a few thoughts on this, perhaps unrelated to each other.

AUTOMATED_TESTING is not limited to Perl. It is used by many test frameworks to indicate an automated testing environment; some know how to read other metadata files, some do not.

Your proposal suggests a method by which the test can communicate to the harness that the test belongs in a certain environment. AUTOMATED_TESTING is a method by which the harness communicates to the test that it is running in a certain environment. Two very different things.

A single test in a test file (rather than the entire set of tests) may need to be skipped in an AUTOMATED_TESTING environment. Under your proposal, I would need to split this test (along with any initialization, setup, and other concerns) into another file.

--MidLifeXis
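For reference, the single-test case described above is already expressible with Test::More's SKIP block; a minimal sketch (the tests themselves are placeholders):

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1, 'this test always runs' );

SKIP: {
    # Skip just this one test when an automated smoke tester is running us,
    # without affecting the rest of the file.
    skip 'not meaningful under automated testing', 1
        if $ENV{AUTOMATED_TESTING};

    ok( 1, 'this test is skipped on smoke testers' );
}
```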