in reply to document your test files
I don't actually think that this would be useful or particularly desirable. The output of the tests should be useful to the author for diagnosing the cause of a test failure on someone else's system (after all, they wouldn't have released with failing tests, would they?). The output most certainly should also give a clear indication when a failure is caused by some problem external to the module code itself (such as an inability to access a database or network service) which the person installing the module could be expected to fix. Likewise, skipped tests should output a clear reason why they were skipped, and TODO tests should probably be commented for the benefit of someone implementing the missing functionality (and possibly to help explain "unexpectedly passed" tests). I might even advocate heavy commenting of test fixtures where particularly hairy or non-portable code is being used to test some feature. Of course, code in tests should be as simple as possible in order to avoid bugs in the tests themselves - I would say that a significant proportion of test failures are due to faulty assumptions in the tests.
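Something along these lines is what I have in mind - a minimal sketch using the stock Test::More; the TEST_NETWORK switch and frobnicate() are invented for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 3;

    # A bare "not ok 1" tells the author nothing; attach a diagnostic
    # so a failure report from a stranger's machine is actually useful.
    my $sum = 2 + 2;
    ok( $sum == 4, 'integer addition' )
        or diag("expected 4, got $sum on perl $] / $^O");

    # A problem external to the module becomes a skip with a clear,
    # installer-actionable reason instead of a mysterious failure.
    SKIP: {
        skip 'no network access (set TEST_NETWORK=1 to enable)', 1
            unless $ENV{TEST_NETWORK};
        require IO::Socket::INET;
        ok( IO::Socket::INET->new( PeerAddr => 'www.example.com:80',
                                   Timeout  => 5 ),
            'can reach www.example.com' );
    }

    # A TODO test documents the missing functionality for whoever
    # picks it up, and explains any "unexpectedly succeeded" lines
    # in the test output.
    TODO: {
        our $TODO;
        local $TODO = 'frobnicate() is not implemented yet';
        ok( main->can('frobnicate'), 'frobnicate() exists' );
    }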
Given all of the above, I don't see the point in writing formal user-level documentation for the tests, and by my standards I would consider it Bad Practice to treat a test suite as some kind of additional documentation or example of a module's usage: it is likely that undocumented or purely internal functionality will be tested in order to get better granularity, and it is also likely that, in a certain class of application, the tests will need to do things that are unlikely to be necessary in real-life usage.
If the documentation and/or examples of a module are weak, then that is what should be fixed. Providing user-level documentation for the tests simply places an unnecessary extra burden on the author.
/J\
|
|---|