if the tests are written before the code, the test harness could generate the code
In which case you have to test that the harness can correctly generate the code. So you still have to write tests. Or else you have to write a code-generating test-harness that can bootstrap itself... at which point you should probably just call it a compiler.
Seriously, though, there is an advantage to keeping code independent of the tests. Think about the (not infrequent) case where the code is right and the test is wrong. Having two different implementations of the same idea of what behavior is correct is essential to flushing out bugs in either the code or the tests.
-xdg
I don't know of anything out there that does this, though there could certainly be something I'm not aware of.
As for how useful it might be, I'd be pretty wary of trying to generate code from a test suite. While the tests might make it clear what the interface should be, they don't say anything about the underlying implementation. I would expect that the generated code would be so simple that it wouldn't be worth it.
I think this is a really interesting and novel idea, and I can't say that I've heard of it before from anyone else.
I think it also might just work.
Now obviously, as you say, there's only so much you can do.
But I don't see why you couldn't at least stub out the classes and methods you see mentioned, and get the package, strict, $VERSION, sub new, sub whateveryousee, and so on in place, based solely on what you see in the tests.
You wouldn't get a lot, but you might just get enough to make it interesting and worthwhile.
I say go for it.
Start with PPI for the parsing, and then build up a simple model of the classes based on what you see.
Then generate the basics for each class and write them out to appropriately named files, positioned relative to the test file according to the normal dist structure.
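A minimal sketch of that approach, assuming for simplicity that the interesting calls are can_ok with plain quoted strings (the file name and %model layout are invented):

use strict;
use warnings;
use PPI;

# Sketch only: walk one test file and collect the class and method
# names mentioned in can_ok() calls.
# NB: qw() method lists would need PPI::Token::QuoteLike::Words too.
my $doc = PPI::Document->new('t/01-basic.t')
    or die 'Parse failed: ' . PPI::Document->errstr;

my %model;    # class name => { method name => 1 }

my $calls = $doc->find(sub {
    $_[1]->isa('PPI::Token::Word') && $_[1]->content eq 'can_ok';
}) || [];

for my $call (@$calls) {
    my $stmt = $call->statement or next;
    # The quoted strings in the statement: first the class,
    # then the method names.
    my @strings = map { $_->string }
        @{ $stmt->find('PPI::Token::Quote') || [] };
    my $class = shift @strings or next;
    $model{$class}{$_} = 1 for @strings;
}

print "$_: ", join(', ', sort keys %{ $model{$_} }), "\n"
    for sort keys %model;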
I would use the method names used for can_ok to generate the method stubs.
But there are several problems:
- what if there are typos (in method names) in the tests?
- once you correct them, does the generator add a new sub, or can it identify the misspelled sub in the already-generated code?
- ...
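Setting those problems aside for a moment, the generation half could be as small as this sketch; the template is invented for illustration, not lifted from any real tool:

use strict;
use warnings;

# Sketch: turn a class name plus the method names gleaned from
# can_ok into stub source.
sub stub_module {
    my ($class, @methods) = @_;
    my $subs = join "\n", map {
        "sub $_ {\n    die \"$class\::$_ not yet implemented\";\n}\n"
    } @methods;
    return <<"END";
package $class;

use strict;
use warnings;

our \$VERSION = '0.01';

sub new {
    my \$class = shift;
    return bless {}, \$class;
}

$subs
1;
END
}

print stub_module('Foo::Bar', qw(x y z));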
Does something like this already exist? If not, would anybody else find it useful?
You're talking about using Perl to build an ad-hoc programming language better suited to writing your tests than Perl itself. After all, if you can describe your problem in a way that a computer can solve, your description is a program, and the process that translates it is a compiler/interpreter.
This is essentially all test driven development does; one produces a (partial) definition of a simplified subset of program behaviour called "tests" (which is hopefully simple enough to be correct), and a second, authoritative definition of program behaviour, called "the program". If the two definitions are inconsistent, you have a problem; one of them is wrong (probably the program, but possibly the test suite).
Once you specify your tests so completely that they're your program, you've hopefully written a simpler program. Then the question becomes: can you write the tests in an even simpler, more obviously correct way using the new programming language you've just developed? It never ends...
> I realise that methods exist to develop modules and tests from a class definition.
Are there some CPAN modules for that? Could you list them for me? Thanks.
-Jeff
Thank you for all the comments.
I would expect that the generated code would be so simple that it wouldn't be worth it.
I find generated stubs pretty useful. They take care of all the preamble, the constructor, the destructor, the POD, the inheritance, and sometimes even basic argument validation.
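For example, a generated stub along these lines (entirely invented content) already saves a fair amount of typing:

package Foo::Bar;

# Invented example of a generated stub: preamble, inheritance,
# constructor with basic validation, destructor, and POD skeleton.
use strict;
use warnings;
use base 'Foo';

our $VERSION = '0.01';

sub new {
    my ($class, %args) = @_;
    die "name is required\n" unless defined $args{name};
    return bless {%args}, $class;
}

sub DESTROY { }    # destructor stub

1;

__END__

=head1 NAME

Foo::Bar - stub generated from the test suite

=head1 METHODS

=head2 new

=cut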
Start with PPI for the parsing, and then build up a simple model of the classes based on what you see.
That's a good idea. I was initially thinking of writing something with Test::More's interface, which created a parse tree instead of validating tests. PPI would probably give me more flexibility, though.
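That first idea might have looked something like this sketch: a stand-in with part of Test::More's interface that records rather than checks. The package name and %model layout are mine, and the mechanism for shadowing the real Test::More (an @INC hook, say) is left out:

package Test::More::Recorder;

# Sketch: mimics part of Test::More's interface, but records what
# the tests claim instead of checking anything.
use strict;
use warnings;

our %model;    # class name => arrayref of method names

sub import {
    my $class  = shift;    # swallow "tests => N" style arguments
    my $caller = caller;
    no strict 'refs';
    *{"${caller}::$_"} = sub { 1 } for qw(plan ok is isa_ok done_testing);
    *{"${caller}::use_ok"} = sub { $model{ $_[0] } ||= []; 1 };
    *{"${caller}::can_ok"} = sub {
        my ($invocant, @methods) = @_;
        push @{ $model{ ref($invocant) || $invocant } ||= [] }, @methods;
        return 1;
    };
}

1;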
It occurred to me also that all I need to do is create a Module::Starter-like script and execute that. Then all the style guidelines and formatting conventions get taken care of...
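If I read its docs right, Module::Starter can also be driven from code rather than the command line, so the hand-off might be no more than this (argument values are placeholders):

use Module::Starter;

# Sketch: hand the class names collected from the tests straight to
# Module::Starter.  %model comes from the recording pass above;
# author and email are placeholders.
Module::Starter->create_distro(
    modules => [ sort keys %model ],
    author  => 'A. U. Thor',
    email   => 'author@example.com',
    builder => 'ExtUtils::MakeMaker',
);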
I would use the method names used for can_ok to generate the method stubs.
Yep. :-)
what if you have typos (in method names) in the test
Presumably you'll see them when you edit the generated code. If not, then there's little difference from typing them wrongly in the module file. I suppose, though, that if we saw can_ok called many times for one method, and once or twice for a typographically similar method, we could warn...
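That warning would only take a few lines with an edit-distance check, e.g. via Text::Levenshtein; a sketch with invented counts and thresholds:

use strict;
use warnings;
use Text::Levenshtein qw(distance);

# Sketch: warn when a method name seen once in can_ok is a single
# edit away from one seen many times.  %count (name => number of
# mentions) and the thresholds are invented.
sub warn_probable_typos {
    my %count  = @_;
    my @rare   = grep { $count{$_} <= 1 } keys %count;
    my @common = grep { $count{$_} > 2 } keys %count;
    for my $r (@rare) {
        for my $c (@common) {
            warn "'$r' may be a typo of '$c'\n"
                if distance($r, $c) == 1;
        }
    }
}

warn_probable_typos( fetch => 5, fetsh => 1, store => 4 );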
I realise that methods exist to develop modules and tests from a class definition.
Are there some CPAN modules for that? Could you list them for me? Thanks.
Not exactly, but modules like Class::Generate, Module::Starter (probably what you're looking for), Class::Std, and Object::InsideOut all help with automating this kind of thing.
I sometimes use a homegrown solution which builds hierarchies from definition files something like this:
Foo
can baz, speak, and glark.
kids must speak
uses Whuffle
Bar
has length of 10, breadth of 10, snickerer of only 0 or 1, and weight matching "^\d+".
can mutate returning Mutator, breed returning Breeder, and teleport.
Glark
can yodel loudly or softly, sing.
inherits Traits::Charm.
This creates Foo::, Foo::Bar, and Foo::Glark all with constructors, preamble, and skeleton POD. All classes inherit from their parent unless you tell them otherwise. The "can" line specifies their methods, and the "has" their attributes. Accessors and mutators are automatically generated. "kids must speak" creates a Foo->speak stub method which complains loudly unless overridden by Foo::Bar and Foo::Glark. Foo::Bar->length defaults to '10', Foo::Bar->snickerer defaults to '0' and can only be changed to '1'. Foo::Bar->weight ensures that its argument starts with a number. There's more to it, but you get the picture. I prefer this approach because it lets me visualise my class hierarchy and brainstorm about who does what. My script creates the module files, generates a basic test suite, and more.
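For the curious, a toy parser for that format might start out like this; the regexes are guessed from the sample and are certainly not the real script:

use strict;
use warnings;

# Sketch: parse a fragment of the definition-file format above.
# The real homegrown script presumably also handles nesting
# (Foo::Bar under Foo), "has", "uses", and "kids must",
# all of which this omits.
my (%class, $current);
while (my $line = <DATA>) {
    next unless $line =~ /\S/;
    if ($line =~ /^(\w+)\s*$/) {                  # bare word opens a class
        $current = $1;
        $class{$current} = { can => [] };
    }
    elsif ($line =~ /^\s*can\s+(.+?)\.?\s*$/) {   # "can x, y, and z."
        push @{ $class{$current}{can} },
            grep { /\w/ } split /\s*,\s*(?:and\s+)?|\s+and\s+/, $1;
    }
    elsif ($line =~ /^\s*inherits\s+(\S+?)\.?\s*$/) {
        $class{$current}{isa} = $1;
    }
}

__DATA__
Glark
  can yodel, sing.
  inherits Traits::Charm.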
This is essentially all test driven development does; one produces a (partial) definition of a simplified subset of program behaviour called "tests" (which is hopefully simple enough to be correct), and a second, authoritative definition of program behaviour, called "the program". If the two definitions are inconsistent, you have a problem; one of them is wrong (probably the program, but possibly the test suite).
Once you specify your tests so completely that they're your program, you've hopefully written a simpler program. Then the question becomes: can you write the tests in an even simpler, more obviously correct way using the new programming language you've just developed? It never ends...
Heh! I'm not looking to write the program from the tests; just the outline of a module hierarchy. The "OO framework", if you like. I'm reasonably familiar with the complexities of compiler writing. ;-)
How about a code generator which generates the code skeleton and the simplest tests?
You start with the project description:
{
class => 'Foo::Bar',
methods => [ 'x', 'y', 'z' ],
}
From this simple description you can generate:
- the directory tree (lib/Foo, lib/Foo/Bar, t, etc);
- a file called Bar.pm with the subs x, y and z;
- a test which tries to use the module, and checks whether it can do x, y and z.
Now you don't even have to write the tests!
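A sketch of that generator under the layout described above (paths and templates are mine, not a finished tool):

#!/usr/bin/perl
# Sketch: generate the skeleton and first test from the description
# above.  File names and templates are illustrative only.
use strict;
use warnings;
use File::Path qw(make_path);
use File::Basename qw(dirname);

my $spec = { class => 'Foo::Bar', methods => [ 'x', 'y', 'z' ] };

(my $pm = "lib/$spec->{class}.pm") =~ s{::}{/}g;
make_path(dirname($pm), 't');

my $subs = join '', map { "sub $_ { die 'not implemented' }\n" }
    @{ $spec->{methods} };

open my $mod, '>', $pm or die "$pm: $!";
print {$mod} "package $spec->{class};\n\nuse strict;\nuse warnings;\n\n$subs\n1;\n";
close $mod;

my $methods = join ' ', @{ $spec->{methods} };
open my $t, '>', 't/00-load.t' or die "t/00-load.t: $!";
print {$t} <<"END";
use Test::More tests => 2;
use_ok('$spec->{class}');
can_ok('$spec->{class}', qw($methods));
END
close $t;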
Good idea: "invert, always invert", as some old algebraist said. However, this is not the way it should be: that the tests pass does not necessarily mean that the code is correct. To think so is like arguing that the world is flat just because we can take a few steps without immediately returning to the starting point.
But I completely agree with the idea that it would be nice to generate test stubs automatically (or method stubs from tests). Sometimes one piece of code requires such intensive testing that one easily forgets to test the rest.
OK, this might be a sign of bad design. Or it might simply mean that the problem to be solved is not really trivial.