perlmeditation
Tanktalus
<p>The exchange between [id://495240] and [pg]'s [id://495246|response] got me thinking - and I found myself in complete agreement with [pg]. The QA guy is always right. Even the rare complete idiot QA guy is still pretty much right, even when s/he's wrong.</p>
<p>The QA guy's job pretty much is:
<ol>
<li>Start with the spec of the program.</li>
<li>Develop a test plan. (Ideally, the test plan is what drives the spec, so the developer has already done this - but even in those ideal scenarios, a little bit of black-box testing can still reveal holes.)</li>
<li>Run the tests.</li>
<li>Get them all to pass.</li>
</ol>
I'm not entirely clear where, in all of this, the QA guy can go wrong. The spec isn't his. The test plan is an interpretation of the spec - if the test plan is wrong, most likely the spec wasn't clear, and we're back to the spec not being his. Running the tests - if a test the QA guy throws at your code isn't in the test plan, we call it "ad hoc testing", and it's still legitimate. Finally, if any test, whether planned or "ad hoc", fails, well, again, the QA guy can't be blamed for that, can he? Not his code.</p>
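<p>The "test plan drives the spec" ideal from step 2 is just test-first development in miniature: each clause of the spec becomes one named test, so a failure points straight back at the sentence it came from. A minimal sketch, with a hypothetical <code>parse_version()</code> (not from any real project) standing in for the program under test:</p>

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical example: suppose the spec says parse_version() "accepts
# strings of the form X.Y.Z and returns (X, Y, Z) as numbers; anything
# else is rejected". Each spec clause below becomes one named test.
sub parse_version {
    my ($str) = @_;
    return unless defined $str && $str =~ /\A(\d+)\.(\d+)\.(\d+)\z/;
    return ( $1 + 0, $2 + 0, $3 + 0 );
}

is_deeply( [ parse_version('5.8.1') ], [ 5, 8, 1 ],
    'spec: X.Y.Z strings parse into three numbers' );
is_deeply( [ parse_version('5.8') ], [],
    'spec: anything not of the form X.Y.Z is rejected' );

done_testing();
```

<p>Run it with <code>prove</code> and the test names double as a checklist against the spec - which is exactly the black-box reading the QA guy does anyway.</p>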
<p>Now, let's say he does something that he's not supposed to do. Is that his fault? Well, if he gets a reasonable error message, he just tested your boundary conditions. If he doesn't, then you're not handling your boundary conditions properly, and you need to address that. Ideally, these are part of the test plan. If not, they probably weren't in the spec anyway.</p>
<p>If the user interface leads him down a forbidden path, again, some ad hoc testing may find an unaddressed boundary condition.</p>
<p>At the very least, the QA guy is finding bugs before the customer/end-user does. And that is always good. Customers/end-users aren't always the most intelligent, so if we can help them past the dumb problems by making sure they get usable messages, that's a plus, too. Even when the QA guy is "wrong", he's still right - in helping to clean up the messages!</p>
<p>From my own most-recent experience going through a test cycle (which we're about to get into again within the next week or so), I have two other observations. First is that I love new hires. And co-op/IIP students. Basically, non-indoctrinated people. People who don't know what can't be done. Not for writing the test plan, but for executing it. When something doesn't look right, they don't know it isn't supposed to work, and will dutifully open a defect against it. I love it. They try some of the weirdest things - few of our customers will be that weird. So if these guys don't catch a bug, no one will. (Yes, we need some really advanced people testing, too, to catch the bugs that only our really advanced customers will find.)</p>
<p>Second is the principle of opening defects. We love it. The rule is: if you're not sure, open a defect and let the developer figure it out. Developers who take it out on the QA folks will get a stern reproach from their team lead - it was the team leads who came up with the policy in the first place. I'd rather get a questionable defect than no defect at all. It lets us prioritise problems, even if that sometimes means we simply document a limitation (a last resort). If two people report the same problem, fine - one report gets returned as a duplicate of the other. That's not a big deal. It's better than neither person opening a defect and shipping with the problem. (Well, we ship with the odd problem anyway, but at least an <i>informed</i> bad decision beats an <i>uninformed</i> bad decision.)</p>