why not just create an issue for adding a test, write it and add it to the automated test suite, and then resolve the issue?
Because we track the entire product, including its documentation, electronics, mechanics, and pneumatics (if any). There are tests that cannot be automated, at least not at a sane price. One of our current projects has a container for some liquid, with a level sensor and a tilt sensor. The container has a manually operated draining valve and - for development and testing - a simple funnel on the top side. (A little bit like the fuel tank on old motorbikes, but for a completely different purpose.) I don't know the exact test spec yet, but I bet you will find instructions like these in the test plans:
- ...
- Fill ten volume units of liquid into the container
- Wait three time units
- Check if the volume displayed by the device is between 9.5 and 10.5 units.
- Tilt the container by five angle units.
- Wait one time unit
- Check if the device issued a tilt alarm
- Return the container to level
- Wait one time unit
- Check that the alarm has turned off
- Drain five volume units out of the container
- Wait one time unit
- Check that the device issued a leakage alarm
- ...
Yes, these are very detailed baby steps of actually using (or misusing) the device under test. Using your fingers, ears, eyes, and a little bit of that grey filling material between the ears. Pretend to be a smart or dumb user.
Yes, you could automate that, using a machine three times as complex as the device under test.
Or just tell an intern to open test plan #47893 in a browser and follow the instructions.
These tests are usually executed and documented ONCE for the entire device, before handing it over to the client. The test instructions and results are part of the documentation handed to the client. Maybe after a year or two, hardware and/or software are modified to better match the client's needs, the tests are modified to match the new requirements, and then those tests are run ONCE to confirm that all requirements are still fulfilled.
So even just thinking about automating them is way too expensive. Interns are way cheaper than automating those tests. Even if they completely f*** up a device under test.
Another part of the tests is simply reading the source code (or schematics, layouts, hardware drawings). Compiler warnings are great, and lint finds a lot of extra mess-ups, but having another developer look at the code (or schematics, plans) can catch a lot of those nasty edge cases everybody hates to debug.
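A contrived Perl illustration (hypothetical code, not from any real project) of the kind of thing I mean: strict, warnings, and a lint run have nothing to complain about here, yet a reviewer who holds the code next to the requirement "between 9.5 and 10.5 units" will immediately ask whether the boundaries themselves are supposed to pass:

```perl
use strict;
use warnings;

# Hypothetical check of the displayed volume against the requirement
# "between 9.5 and 10.5 units".
sub volume_ok {
    my ($displayed) = @_;
    return ( $displayed > 9.5 && $displayed < 10.5 );   # boundaries excluded
}

# No warning, no lint complaint - but is this what the requirement means?
print volume_ok(10.5) ? "ok\n" : "not ok\n";   # prints "not ok"
```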
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Ok, so not software tests. Well, then that goes to my point about the wiki: each of these platforms has a version-controlled wiki, and you could easily set up a standard location for all real-world test plans to be documented. If you want them in the same repo as the code, then just designate a subdirectory for them and write them in Markdown, which renders nicely and is easy to edit from the web interface.
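For example (directory and file names invented for illustration, steps borrowed from the container example above), test plan #47893 could live in the repo as something like docs/test-plans/TP-47893.md:

```markdown
# Test plan 47893: container level and tilt alarms

| Step | Action                                          | Expected result                       | Pass/Fail |
|------|-------------------------------------------------|---------------------------------------|-----------|
| 1    | Fill ten volume units of liquid, wait 3 units   | Displayed volume between 9.5 and 10.5 |           |
| 2    | Tilt the container by five angle units, wait 1  | Tilt alarm issued                     |           |
| 3    | Return the container to level, wait 1 time unit | Alarm turned off                      |           |
| 4    | Drain five volume units, wait 1 time unit       | Leakage alarm issued                  |           |
```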
[...] version controlled wiki [...] for all real-world testing plans to be documented [...]
It's not just documenting the tests. It's also about traceability. You don't write tests like a poem, "inspired" by the requirements. Each and every requirement needs to have at least one test. A "test" may be a lab test, a source code reading, or a datasheet reading (e.g. if a requirement demands UL-listed parts). In the end, you have a lot of tests, and each test verifies at least one requirement. You have test plans, grouping tests reasonably (i.e. you don't mix lab tests and source code readings). And you have test executions, documenting that a test plan was executed partially or completely, including the test results. All of that can be traced back to the requirements.
Yes, it can be done in a wiki or in Excel. We did it in Excel. It sucked. Really. Starting with the fact that an Excel document can only be edited on a single machine at a time. A wiki would have improved that, but you would still have to do all the tracing manually. In that regard, Jira + R4J + T4J is a huge improvement. Three people executing different test plans in parallel is no problem, and checking the traces takes just a few mouse clicks instead of hours of clicking through MS Office documents. And once you reach 100% test execution, a few more mouse clicks export documents listing each test, its execution(s), and the links to the requirements. That can be added to the project documentation, making the client's auditors happy. And ours, because we can trace any single test result back to the requirements directly in the web browser. It really impresses auditors.
(And no, we don't do that just to make auditors happy. It's a nice side effect.)
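To make the idea concrete, here is a toy Perl sketch with invented requirement and test IDs. The real tracing lives in Jira + R4J + T4J (or, painfully, in Excel), but the underlying relation is simply "each test verifies one or more requirements, and no requirement may be left without a test":

```perl
use strict;
use warnings;

# Invented IDs, for illustration only.
my @requirements = qw(REQ-010 REQ-011 REQ-012 REQ-013 REQ-014);
my %tests = (
    'LAB-001' => { verifies => [ 'REQ-010', 'REQ-011' ], result => 'pass' },
    'SRC-002' => { verifies => [ 'REQ-012' ],            result => 'pass' },
    'DOC-003' => { verifies => [ 'REQ-013' ],            result => undef  },   # not executed yet
);

# Build the reverse trace: requirement -> verifying tests.
my %covered_by;
for my $test ( sort keys %tests ) {
    push @{ $covered_by{$_} }, $test for @{ $tests{$test}{verifies} };
}

# Every requirement must be covered; report the traces and the gaps.
for my $req (@requirements) {
    if ( my $verifying = $covered_by{$req} ) {
        print "$req is verified by @$verifying\n";
    }
    else {
        print "$req has NO test - that must not happen\n";   # REQ-014 in this sketch
    }
}
```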
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
add it to the automated test suite
As afoken already said, in big real-world applications you can often only automate a small fraction of the tests.
For my point-of-sale system, about 50% of the issues in the field are usability issues of the UI. And yes, that includes things like "an old lady refusing to wear her glasses and complaining that the text is too small" (so we made the texts and buttons of the main UI mask scalable), or touch screen gestures getting misinterpreted because the user performed them in the laziest way possible...
A lot of UI stuff is very subjective and the results vary from customer to customer and from user to user and from device to device.
Yes, one may (and should) write tests for some of those cases, when it is the best financial choice. But in many cases, a trained eye can catch errors in seconds or minutes for which you would have to spend weeks writing test software. And you would need to keep spending weeks updating that software for every small UI change, instead of going through a 10-minute checklist manually...