in reply to Re: Testing A Stand Alone Application
in thread Testing A Stand Alone Application

Hi BrowserUk,

Thanks for the comment...
As you mentioned, my aim is to prompt replies about how to test a stand-alone script in general, not to have the specific example in my post tested.

And what I intend to do is get into the habit of writing the tests first, before the actual coding, as suggested by PBP chapter 18.1:
    So write the tests first. Write them as soon as you know what your interface will be.
Write them before you start coding your application or module.

The impression I get from your comment
    But in the absence of any real code, it is pretty much impossible to suggest good ways of testing.
is that you need to write the code first, so that you know what to test later.

The example I provided in my post is supposed to be the interface I'll use...


Thanks.

Re^3: Testing A Stand Alone Application
by BrowserUk (Patriarch) on Mar 13, 2008 at 00:31 UTC

    My point was that in the code you've posted, there is nothing to test, so it's hard to demonstrate any mechanism for testing it.

    The example I provide in my post is supposed to be the interface I'll use...

    Interface to what? So far, the code does nothing, and you've provided no indication of what it should do. To write tests first, you have to know what the end point (or an intermediate point) is going to be, so that you can construct a test to verify that when you write the code, you have achieved it.

    For example, you've indicated that you'll be processing XML files somehow. But how?

    • Will you be generating structure?

      If so, you might verify that structure.

      Maybe, by feeding in a known XML and hard coding a structural equivalent to verify against.

    • Will you be transforming the input to some output? A different XML? CSV?

      If so, you might test by feeding in a known XML and comparing the result against a hand-coded output.

    You (we) gotta know what you are aiming for before you can construct a test that will check that your code, when you write it, achieves that.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Ahhh fairynuff, I should have provided more information about the interfaces... okay, here you go:
      • get_xml_files
        1. input: an existing directory
        2. objective: glob all XML files in the directory
        3. output: a list of filename.xml entries, or an empty list
      • extract_file
        1. input: a list of XML files
        2. objective:
          1. parse each XML file (using XML::Simple)
          2. find a specific tag, e.g. IMPRESSIONS
        3. output: write to STDOUT each filename.xml and the IMPRESSIONS found

      I think that should answer your question about the interface? I know that functionality can be had in one line with
          grep IMPRESSIONS *.xml
      But again it's only an example...
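
      Based on that spec, a hedged sketch in Perl might look as follows. The sub names come from the list above; everything else (the error handling, and the simplification of only looking for IMPRESSIONS at the top level) is my assumption, not real code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the two interfaces described in the spec above.

sub get_xml_files {
    my ($dir) = @_;
    die "No such directory: $dir\n" unless -d $dir;
    return glob "$dir/*.xml";          # empty list if nothing matches
}

sub extract_file {
    my (@files) = @_;
    require XML::Simple;               # CPAN module named in the spec,
                                       # loaded lazily so the sketch
                                       # compiles without it installed
    for my $file (@files) {
        # XML::Simple flattens the document into nested hashes/arrays;
        # for simplicity this sketch only looks for a top-level
        # IMPRESSIONS element, which shows up as a hash key.
        my $data = eval { XML::Simple::XMLin($file) } or next;
        print "$file: $data->{IMPRESSIONS}\n"
            if ref $data eq 'HASH' and exists $data->{IMPRESSIONS};
    }
}
```
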

        Then, off the top of my head there are perhaps four things to test.

        1. Does it do the right thing when no .xml files are found?
        2. Does it do the right thing if a .xml file is found that fails to parse as XML?
        3. Does it do the right thing if the file contains XML, but none of the tags you are looking for?
        4. Does it do the right thing--produce the appropriate output in the appropriate form--when the file is found, contains XML and the required tags?
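
        For comparison, if the extraction logic were factored into a function rather than buried in the script, those four cases map almost directly onto Test::More assertions. find_impressions() below is a hypothetical stand-in (a crude regex scan in the spirit of the grep one-liner above, not the real parser), and the two directory-level cases are approximated here at the function level:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;

# Hypothetical stand-in for the extraction logic: scans a string of XML
# for <IMPRESSIONS> content. Returns undef for missing/non-XML input,
# otherwise a reference to the list of values found.
sub find_impressions {
    my ($xml) = @_;
    return undef unless defined $xml && $xml =~ /<\w/;  # crude "is it XML?" check
    my @hits = $xml =~ /<IMPRESSIONS>([^<]*)<\/IMPRESSIONS>/g;
    return \@hits;
}

is        find_impressions(undef), undef, 'missing input handled gracefully';
is        find_impressions('not xml at all'), undef, 'non-XML input rejected';
is_deeply find_impressions('<root><OTHER>1</OTHER></root>'), [],
          'XML without the tag yields an empty list';
is_deeply find_impressions('<root><IMPRESSIONS>42</IMPRESSIONS></root>'), ['42'],
          'IMPRESSIONS content is extracted';
```

        Run it with prove or plain perl; each failing case is reported individually.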

        A test script (not using Test::*) might look something like:

        #! perl -slw
        use strict;

        use constant DIR => '/path/to/dir/';

        ## Temporarily rename the test files
        rename $_, $_ . 'X' for glob DIR . '*.xml';

        ## And compare the output with a reference file
        ## containing the expected output for the no-xml case.
        system 'perl.exe thescript.pl > noxml.out && diff noxml.out noxml.ref';

        ## Get the xml files back again.
        for my $file ( glob DIR . '*.xmlX' ) {
            my $new = $file;
            chop $new;
            rename $file, $new;
        }

        ## And test the other three cases by diffing the actual output
        ## produced by processing 3 test files constructed to demonstrate them
        ## against a file containing the expected output.
        system 'perl.exe thescript.pl > xml.out && diff xml.out xml.ref';

        Initially, you'll be verifying your output manually. But then you redirect the validated output to a file and it becomes the reference for future tests. Use Carp to give you feedback on where things went wrong.

        If you add temporary or conditional tracing to track down problems, it does not prevent the test from verifying the bits that did work.

        Run the test script from within a programmable editor and you can use the traceback provided by Perl to take you straight to the site of failing code.

        As you think of new criteria to test, you construct a new, small .xml file to exercise each criterion, and the second run (system) above will pick them up automatically. So your tests consist of a 10-line script you reuse, plus a short .xml file for each criterion.
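
        For instance, a minimal fixture for case 3 above (well-formed XML that lacks the wanted tag) need be no bigger than:

```xml
<!-- notag.xml: parses fine, but contains no IMPRESSIONS element -->
<root>
  <OTHER>some value</OTHER>
</root>
```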

        Or you could do it the hard way.

