in reply to Re^2: Testing A Stand Alone Application
in thread Testing A Stand Alone Application

My point was that in the code you've posted, there is nothing to test, so it's hard to demonstrate any mechanism for testing it.

The example I provide in my post is supposed to be the interface I'll use...

Interface to what? So far, the code does nothing, and you've provided no indication of what it should do. To write tests first, you have to know what the end point (or an intermediate point) is going to be, so that you can construct a test to verify that when you write the code, you have achieved it.

For example, you've indicated that you'll be processing XML files somehow. But how?

You (we) gotta know what you are aiming for before you can construct a test that will check that your code, when you write it, achieves that.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^4: Testing A Stand Alone Application
by est (Acolyte) on Mar 13, 2008 at 00:53 UTC
    Ahhh fairynuff, I should have provided more information about the interfaces... okay, here you go:
    • get_xml_files
      1. input: an existing directory
      2. objective: glob all xml files in the directory
      3. output: a list of filename.xml, or an empty list
    • extract_file
      1. input: a list of xml files
      2. objective:
        1. parse each xml file (using XML::Simple)
        2. find a specific tag, e.g. IMPRESSIONS
      3. output: write to STDOUT the filename.xml and the IMPRESSIONS found

    I think that should answer your question about the interface? I know that functionality can be done in one line by
        grep IMPRESSIONS *.xml
    But again it's only an example...
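    To make that a bit more concrete, here is a rough sketch of the two subs as I picture them (the flat $data->{IMPRESSIONS} lookup is only illustrative; real files would need proper traversal):

        use strict;
        use warnings;
        use XML::Simple;

        ## Return the .xml filenames found in the given directory (possibly none).
        sub get_xml_files {
            my $dir = shift;
            return glob "$dir/*.xml";
        }

        ## Parse each file and print its name plus any IMPRESSIONS value found.
        sub extract_file {
            for my $file ( @_ ) {
                my $data = XMLin( $file );
                print "$file: $data->{IMPRESSIONS}\n"
                    if exists $data->{IMPRESSIONS};
            }
        }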

      Then, off the top of my head there are perhaps four things to test.

      1. Does it do the right thing when no .xml files are found?
      2. Does it do the right thing if a .xml file is found that fails to parse as XML?
      3. Does it do the right thing if the file contains XML, but none of the tags you are looking for?
      4. Does it do the right thing--produce the appropriate output in the appropriate form--when the file is found, contains XML and the required tags?

      A test script (not using Test::*) might look something like:

      #! perl -slw
      use strict;

      use constant DIR => '/path/to/dir/';

      ## temporarily rename the test files
      rename $_, $_ . 'X' for glob DIR . '*.xml';

      ## And compare the output with a reference file
      ## containing the expected output for the no xml case.
      system 'perl.exe thescript.pl > noxml.out && diff noxml.out noxml.ref';

      ## Get the xml files back again.
      for my $file ( glob DIR . '*.xmlX' ) {
          my $new = $file;
          chop $new;
          rename $file, $new;
      }

      ## And test the other three cases by diffing the actual output
      ## produced by processing 3 test files constructed to demonstrate them
      ## against a file containing the expected output.
      system 'perl.exe thescript.pl > xml.out && diff xml.out xml.ref';

      Initially, you'll be verifying your output manually. But then you redirect the validated output to a file and it becomes the reference for future tests. Use Carp to give you feedback on where things went wrong.
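      For instance, inside thescript.pl you might do something along these lines (parse_or_complain is just a made-up name for illustration):

          use strict;
          use warnings;
          use Carp;
          use XML::Simple;

          ## Warn, with caller context, when a file fails to parse,
          ## rather than letting the script die quietly.
          sub parse_or_complain {
              my $file = shift;
              my $data = eval { XMLin( $file ) };
              carp "Failed to parse '$file': $@" unless defined $data;
              return $data;
          }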

      If you add temporary or conditional tracing to track down problems, it does not prevent the test from verifying the bits that did work.

      Run the test script from within a programmable editor and you can use the traceback provided by Perl to take you straight to the site of failing code.

      As you think of new criteria to test, you construct a new, small .xml file to exercise each criterion, and the second run (system) above will pick them up automatically. So, your tests consist of a 10 line script you reuse, and a short .xml file for each criterion.
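      For example, the test file for case 4 might be no more than this (the <report> wrapper and the value are made up; only the IMPRESSIONS tag comes from the example above):

          <?xml version="1.0"?>
          <report>
            <IMPRESSIONS>12345</IMPRESSIONS>
          </report>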

      Or you could do it the hard way.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        Hi BrowserUk,

        Now, that gives me an idea on how to test :)
        Thanks for that!

        Anyhow, your suggestion seems to test the entire application, i.e. black-box testing, where we pass an input to the application and diff the output. But I can't test the output from each subroutine...

        Am I right in thinking that the only way to test each individual sub would be to create a module (as Thilosophy suggested), even if the code is not reusable by any other scripts?
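        Something like this is what I imagine (only a sketch; the module name, the fixture directory and the Test::More usage are my guesses at what was meant):

            ## MyApp/Extract.pm -- the same subs, just moved out of the script
            package MyApp::Extract;
            use strict;
            use warnings;
            use Exporter 'import';
            our @EXPORT_OK = qw( get_xml_files extract_file );

            sub get_xml_files {
                my $dir = shift;
                return glob "$dir/*.xml";
            }

            ## extract_file would move here too

            1;

            ## t/extract.t -- now each sub can be exercised on its own
            use strict;
            use warnings;
            use Test::More tests => 1;
            use MyApp::Extract qw( get_xml_files );

            ## t/data/empty is a hypothetical empty fixture directory
            is_deeply [ get_xml_files( 't/data/empty' ) ], [],
                'an empty directory gives an empty list';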

        Thanks!