Neither system testing nor user acceptance testing is the repeat of unit testing (OT)

by pg (Canon)
on Oct 23, 2005 at 17:46 UTC

Last week we released a new version of our internal application. All went well: as the coordinator of the release, I had been busy with both the technical and the management sides of the project for half a year, but in the end the outcome was good, with only some minor defects. However, there is something that worries me: some of the team members, including some senior ones, have the wrong perception of testing. I am worried that we are going to repeat the same mistakes in future releases if this perception is not corrected.

It all started with one program that displayed a few bugs (it was actually the only buggy program in this release). When I did the post mortem, I determined that most of those bugs should have been caught during unit testing, and that unit testing was the best opportunity to catch them. For example, there was one dynamic query, part of which was formed like this: where a=band c=d (missing space between "b" and "and"). If this query had ever been executed during testing, a SQL syntax error would have popped up right away, but it was never caught. I talked to the system testing guy, and he told me, only now, that when he was doing system testing that particular program was barely working. I hold the system testing guy responsible for not being tough and not telling me the truth during system testing, but I don't think he is responsible for SQL syntax errors: system testing is not required to go through every physical branch of the code, whereas unit testing is. However, one senior person on the team disagreed with me. Her point was: "this bug passed unit testing, system testing, and user acceptance testing; we were out of luck and there was really nothing we could do."
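
This is exactly the kind of defect a unit test on the query-building code would have flushed out. A minimal sketch, assuming a made-up build_where helper and made-up column names (the real dynamic-query routine will of course look different):

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Hypothetical stand-in for the routine that assembles the dynamic query.
    sub build_where {
        my @conds = @_;
        return 'where ' . join(' and ', @conds);
    }

    my $clause = build_where('a=b', 'c=d');

    is($clause, 'where a=b and c=d', 'conditions are separated by a spaced "and"');
    unlike($clause, qr/\Sand\b|\band\S/, 'no condition runs into the "and" keyword');

Either assertion fails loudly the moment the join loses its space, long before system testing ever sees the program.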

My first and biggest problem was that she obviously misunderstood the differences between those three testing phases. She had the perception that those three phases were meant to repeat the same testing, just with different people, so that we could reduce mistakes through sheer redundancy of people and effort.

That is not my view. To me, those three phases serve different purposes:

  • The three phases have different focuses, and you don't repeat the same test cases. In my releases, system testing mainly focuses on whether all the programs agree with each other, and user acceptance testing mainly focuses on whether the programs satisfy user requirements. It is unit testing that really cares about every corner of each program, not the other phases.
  • System testing and user acceptance testing are basically black box testing. Obviously the users will not open up your programs; they don't care and they don't understand ;-). Although system testing is done by technical people, they don't have the time to open up each piece of code, and I don't believe they are supposed to. Unit testing is the only white box testing we do, and it is up to the individual programmer to make sure that each line of their program is executed at least once and each logical branch is exercised at least once (a small sketch of what that looks like follows this list).
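
To make the white-box goal concrete, here is a minimal sketch (discount_rate is a made-up routine): a unit test that deliberately drives execution down both sides of a branch, which is exactly the kind of case system or acceptance testing would never bother to construct.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Made-up routine with a single logical branch; the test exercises
    # both paths at least once.
    sub discount_rate {
        my ($order_total) = @_;
        return $order_total >= 1000 ? 0.10 : 0;
    }

    is(discount_rate(1500), 0.10, 'large orders take the discount branch');
    is(discount_rate(200),  0,    'small orders take the no-discount branch');

A coverage tool such as Devel::Cover can then report any line or branch the suite still misses.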

One release has just gone by and the next one is about to come; in between, I will focus on a campaign to correct people's perception of testing, so that future releases will be better!

  • Replies are listed 'Best First'.
    Re: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
    by dragonchild (Archbishop) on Oct 23, 2005 at 17:56 UTC
      Well written. ++!

      I have one nit to pick - unit-testing is still black-box testing. One should be testing against the published interface and verifying that the code performs according to spec. Of course, this assumes that one has both well-defined interfaces and a well-defined spec (including error-cases) to work off of.

      TDD has a very similar point of view, changing only that the unit-tests combined with the user stories are the spec. The system/integration testing and UAT phases are more verification of the user stories + validation that the user stories are correct. In the terms of the Pragmatic Programmer, TDD is tracer bullets on steroids, with UAT being the iteration.
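
      For what it's worth, a black-box unit test written purely from a documented interface might look like this sketch (My::Parser and its documented behaviour, including the error case, are invented for illustration):

          use strict;
          use warnings;
          use Test::More tests => 3;

          # Written from the published interface only: parse() is documented to
          # return a hashref on success and to die with "empty input" when
          # handed an empty string. Nothing here peeks at the internals.
          use My::Parser;

          my $result = My::Parser->parse('key=value');
          isa_ok($result, 'HASH', 'parse() returns a hashref as documented');
          is($result->{key}, 'value', 'documented key/value behaviour holds');

          eval { My::Parser->parse('') };
          like($@, qr/empty input/, 'documented error case is honoured');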


      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

        Good point! But there are at least two methodologies to choose from: white box and black box. This is especially true of unit testing, and it gets less true of system testing and user acceptance testing.

        For unit testing, most of the time it is a mixture of white box testing and black box testing, and they have different focuses. Black box testing is used to ensure that the interfaces a module/function/procedure provides are correct as spec'd. White box testing comes in to make sure that each line of code has a reason to exist and, if it exists, that it is correct.

        It also has something to do with our ability to define test cases. White box testing depends more on that ability, which, just like any other ability human beings possess, is far from perfect ;-)

        I mostly agree with you, dragonchild, except for this small piece (which does not deprive you of your well deserved ++):

        unit-testing is still black-box testing

        The blackness of the boxes you test with is orthogonal to the phase of testing you are at. Seriously, you can base your cases on the documented behavior (what the code is supposed to do) and also add cases to stress the inner workings of the code you're looking at.

        If you generalize, you'll see that you cannot ensure you're achieving the desired level of test coverage unless you peek under the hood and verify that the code is being exercised in the intended (or unintended, depending on who you ask) way.

        The point is that you must try to use both black box and white box for as long as you can. As you broaden the scope of your tests (i.e., you move from unit testing to system testing, also called integration testing), the number of cases and paths makes the white box approach too resource intensive.

        Otherwise, you may have all the test cases you want, and still miss bugs because you did not see how the actual code did something in particular.

        But even when white box can help provide more effective test cases, this does not mean you can forget about black box testing. If you rewrite a substantial part of the code, chances are your white-box test cases become less effective, because they may now be tickling different code, or in different ways. But your black-box tests will still be verifying that at least, the interface is working as expected.

        Update: pg is right on the money (++ too): For unit testing, most of the time, it is a mixture of white box testing and black box testing (...)

        Best regards

        -lem, but some call me fokat

          Oooh. This is where I start to get a little antsy. In my extremely un-humble opinion, white-box testing is a CodeSmell. If you need to look at the code instead of the interface to determine what tests you need to run, your interface isn't correctly designed. Correct interface design and mocking will allow you to black-box all your unit-tests.

          That's a big statement, but I'm willing to be proven wrong. (Yes, that's a challenge.)
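
          To make the claim concrete, here is a rough sketch of the mocking style (Test::MockObject is a real CPAN module; Billing::Report and its interface are invented): the dependency is mocked out, so the unit test stays entirely on the black-box side of the published interface.

              use strict;
              use warnings;
              use Test::More tests => 2;
              use Test::MockObject;

              # Billing::Report (hypothetical) takes an account object and asks
              # it for its balance; the account is mocked, so no database is
              # involved and only the published interface is exercised.
              use Billing::Report;

              my $account = Test::MockObject->new;
              $account->mock(balance => sub { 250 });

              my $report = Billing::Report->new(account => $account);
              is($report->summary, 'Balance due: 250', 'report formats the mocked balance');
              ok($account->called('balance'), 'balance was fetched through the interface');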


          My criteria for good software:
          1. Does it work?
          2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
        Hi pg and other fellow monks,

        Again, as a representative of the testing community, I welcome your discussion of a topic that needs to be raised almost everywhere in the world that employs testing professionals.

        After being in testing for two and a half years, I feel very few people have a clear concept of what testing is.

        I have never been outside India, so my comments are based on what I have seen in Bangalore.

        1) Most of the people here will ask you "what tools do you know?" if you say you are a tester; if you are not given a product, how will you test something?
        2) Most of the people here are confused by the testing terminology. Regression testing, which is basically selective re-testing, is misunderstood as executing all the cases.
        3) Many of the people I know put fake experience on their resumes and enter MNCs, thereby decreasing the quality of the product.
        4) There are more testing institutes than software companies in India, which teach testing tools and claim it is a testing course.
        5) Some companies make the developers do system testing on top of unit testing, for lack of funds to hire a testing guy.
        6) Even MNCs are not clear about the position they offer; some companies are confused between QA and Testing.
        7) A QA engineer does testing in the current company, and sometimes testing people are asked to look at the process, which is a QA role.
        8) If you join some Indian students' Yahoo groups, 10% of the people in the group have a query asking whether to join a testing course and how much money they will get back.
        9) Testing is more of an attitude game and not a profession either.
        10) Every company should spend a day or two testing the product for those scenarios which don't appear in the test plan/cases before user acceptance testing.
        11) Rarely can you find a good quality tester in India, because every tester wants to go to development, feeling there is more money there and more travel opportunity.
        12) I have seen many people skip cases. What is the use of writing a case when the people testing it skip it?
        13) I have also noticed people writing duplicated test cases.
        14) Test data is another important aspect of testing, on which most companies constrain themselves if it costs a few $.
        15) When the devoid testers become leads/managers, their management principles contribute to the degradation of quality.
        16) Testers here are rarely allowed to undergo the training the developer undergoes, which is not a good sign for the quality of the product.
        17) Freshers are hired, directly given test cases, and asked to execute them. Neither the company nor these freshers worry that they should understand the domain/technology before they get hands-on with something.
        18) I am a tester by virtue and not by profession. Every other team member wants to jump to development or something else soon.
        19) All interviews in India today for testing have one question in common: "why do you like testing and will you be in testing for a long time?" Gosh! Look at the poor state of my virtue.
        You people have seen more than me and so would have better views; please excuse me if anything was wrong, because I am naive :-D.

        Regards

        Prad

    Re: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
    by ady (Deacon) on Oct 24, 2005 at 05:29 UTC
      A strict unit test SHOULD have caught your bug here, via code coverage:
      ...final important step is to analyze and maximize code coverage. We recommend that you implement a dynamic coding standard requiring that all code must have 100% line coverage upon check in to source control.

      but of course, even with 100% code coverage, bugs may slip through due to the Quis Custodiet Ipsos Custodes (who watches the watchmen?) problem of writing test software...
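
      For Perl code, Devel::Cover is the usual tool for enforcing such a rule; a typical invocation (assuming the test suite lives under t/) is roughly:

          cover -delete
          HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t/
          cover

      The final report lists statement, branch, and subroutine coverage per file, so a "100% line coverage on check-in" rule can be checked mechanically rather than by eyeballing the tests.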

      The issue of boxing/unboxing is not black/white; there is a spectrum of "grey goo" in between...

      / allan
    Re: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
    by rir (Vicar) on Oct 24, 2005 at 04:24 UTC
      it is up to the individual programmer to make sure that each line ... of their programs [is tested]

      That sounds like a problem: people are auditing their own work.

      Given that you say things have gone pretty well I don't know that an educational campaign is warranted; teaching people extraneous stuff is a great time-sink. Improving test coverage may be more effective.

      I would start with the proximate causes: How can embedded SQL be tested? What does the tester need in order to be able to report as a flaw the behavior he saw? That a senior team member seems to need to feel blameless is a more sensitive thing to handle, but if the problem was not under her jurisdiction it is not germane. That she is muddled about the various test phases may also be beside the point.
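
      One answer to the embedded-SQL question, offered only as a sketch: have the unit test prepare the generated statement against a throwaway in-memory database, so syntax errors surface immediately. This assumes DBD::SQLite is available; the table and column names are invented, and the SQL string stands in for whatever the application's query builder produces.

          use strict;
          use warnings;
          use Test::More tests => 1;
          use DBI;

          # Throwaway in-memory database; only the SQL syntax is being checked.
          my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                                 { RaiseError => 1, PrintError => 0 });
          $dbh->do('CREATE TABLE t (a INT, b INT, c INT, d INT)');

          # Imagine this string came from the application's query builder.
          my $sql = 'SELECT * FROM t where a=b and c=d';

          ok(eval { $dbh->prepare($sql); 1 }, 'generated SQL compiles')
              or diag($@);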

      Be well, rir

    Re: Neither system testing nor user acceptance testing is the repeat of unit testing (OT)
    by adrianh (Chancellor) on Oct 25, 2005 at 09:12 UTC
      She had the perception that those three phases were meant to repeat the same testing, just with different people, so that we could reduce mistakes through sheer redundancy of people and effort.

      I just want to pick on calling unit/system/acceptance testing "phases".

      They're different kinds of testing, but I think treating them as phases (first do all the unit testing, then do all the system/integration testing, then do all the user acceptance testing) can lead to many nasty situations.

      For example, leaving all the user acceptance testing to the end means that if you've misinterpreted the requirements and have working code that doesn't meet the users' needs, then you're going to be throwing a lot of work away.

      In my experience it's much better to, as much as possible, do all three at the same time. Build your development process around small end-to-end increments of the project so you can be continually doing system and acceptance testing as you go.
