Re^5: Developing a module, how do you do it ?
by BrowserUk (Patriarch) on Mar 02, 2012 at 22:19 UTC
Do these ideas make any sense?
To me, no.
- variable collision -- use scopes: { ... }; problem solved.
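For illustration, a minimal sketch (module name and tests hypothetical, not anyone's actual code) of what scoped, embedded tests can look like -- run only when the file is executed directly, skipped when it is use'd:

package My::Module;
use strict;
use warnings;

sub add { return $_[0] + $_[1] }

# Run the embedded tests only when this file is executed directly,
# not when it is loaded via use/require:
unless ( caller() ) {
    {   # each test in its own block, so lexicals cannot collide
        my $got = add( 2, 2 );
        $got == 4 or die "add(2,2) returned $got, expected 4";
    }
    {
        my $got = add( -1, 1 );
        $got == 0 or die "add(-1,1) returned $got, expected 0";
    }
    print "all tests passed\n";
}

1;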
- Easier for someone who installs the module -- How so?
If the module is installed, and the tests are in the module, the tests are installed also. Even if they "installed" it by manually copying the file into the filesystem because -- as a recent post here had it -- that was the only method available to them.
- know in which file to look -- if it is all in one file, you know where to look.
- no need to search around -- even once you've found the right .t file, you'll still need to locate the failing test.
And some of those .t files are huge and have hundreds of tests.
And very often the only clue you have is a test number; working out which block of code relates to that number means manually stepping through the file and counting, which is often far from simple to do.
- and understand what 'ok' means without needing to think about how the testing algorithm works -- Sorry, but that is the biggest fallacy of them all.
When the result is "not ok", then you first have to locate the failing test (from its number), then you have to read the test code and try to understand what boolean test failed; then you have to try and understand what code in the actual module that failure represents; then you have to switch to the actual module and locate the code that failed; then you have to keep switching between the two to understand why; then you have to fix it.
And if that process is "easy", it probably means that it was a pointless test anyway.
- it enables to use Coverage tools -- Another fallacy.
Coverage tools are nothing more than a make-work, arse-covering exercise. I've seen people go to extraordinary lengths to satisfy the coverage tools' need to have every code path exercised, even when many of those codepaths are for exceptional conditions that are near impossible to fabricate because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.
But in the rare event that those types of failures -- disk failures, bus-failures; power outages etc. -- actually occur, there is not a damn thing your application can do about it anyway, beyond logging the error and dying. There is no value or purpose in testing that you do actually die in those situations for the sake of satisfying a bean-counting operation.
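For concreteness, this is the sort of thing being described -- a sketch using Test::MockModule, with hypothetical module and method names:

use Test::MockModule;

# Replace My::Storage::write_block (hypothetical) with a stub that
# fabricates the rare failure, so the error path can be "covered":
my $mock = Test::MockModule->new('My::Storage');
$mock->mock( write_block => sub { die "simulated disk failure\n" } );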
- there will be a test there to clearly indicate what didn't work and where. -- If only that were true.
It will tell you that test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in which .t file and where the failing test happened.
But it certainly won't tell you why it failed. Or where in the actual module the code that underlies that failure lives, much less why it failed. Was it the test? Was it the test environment? Was it the code being tested?
But *I* am the square peg in this. It seems that most people have, like you, bought into the Kool-Aid. I'm not here to tell you how you should do it; just to let you know that there are alternatives and let you reach your own conclusions about what fits best with your way of working. Good luck :)
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I've seen people go to extraordinary lengths to satisfy the coverage tools' need to have every code path exercised, even when many of those codepaths are for exceptional conditions that are near impossible to fabricate because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.
I agree, but don't replace crazy with crazy.
Write tests that provide value. Use coverage to see if you've missed anything you care about. Think about what that coverage means (all it tells you is that your test suite somehow executed an expression, not that you tested it exhaustively).
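For reference, a typical Devel::Cover session in a standard CPAN-style distribution looks something like this:

cover -delete    # clear any stale coverage database
cover -test      # build and run the test suite under Devel::Cover
cover            # report statement/branch/condition/subroutine coverage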
If you're not getting value out of these activities, you're probably doing something wrong. That could mean fragile tests or tests for the wrong thing. That could mean that you have a problem with your design. That could mean too much coupling between tests and code or too little (and too much mocking).
There's no substitute for understanding what you're doing, so understand what you're doing.
(But don't resolve never to use a lathe or a drill press because you heard someone once used one somewhere for eeeeeevil.)
There's no substitute for understanding what you're doing, so understand what you're doing.
(But don't resolve never to use a lathe or a drill press because you heard someone once used one somewhere for eeeeeevil.)
I utterly agree with both those statements.
Unfortunately, understanding takes time, practice, and a few different projects and types of project before the patterns from which understanding forms become evident. As a substitute, society tries to teach experience; but that is a very hard thing to do. So you end up with guidelines that omit the reasoning, and thus become unassailable dogmas. Hence, I received a recruiter circular a few months back that asked for a "Perl programmer experienced in coding to PBP/PerlCritic standards."
(Really. Honestly. I just checked to see if it was still hanging around in my email somewhere but it isn't :( )
With respect to coverage tools. If a module is big enough that I need a computer program to tell me if I've covered it sufficiently with tests, it is big enough that it will be impossible for a human being to get a clear enough overview to be able to successfully maintain it. It is therefore too damn big.
But then, for any given problem I tend to write about 1/10 of the code that the average programmer seems to write. Mostly because of the code I don't write.
And some of those .t files are huge and have hundreds of tests.
So the tests are "huge" yet you want to put them at the end of the module, and force perl to parse them each time the module is loaded?
It will tell you that test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in which .t file and where the failing test happened.
The Test::More/Test::Simple family of modules report a lot more than that on a test failure. At a minimum they report the test number, line number and filename where the test occurred.
The documentation for these testing modules strongly encourages you to give each test a name, e.g.:
is($obj->name, "Bob", "object has correct name");
If that test fails, you get a report along the lines of:
ok 1 - object can be instantiated
not ok 2 - object has correct name
#   Failed test 'object has correct name'
#   at t/mytests.t line 12.
#          got: 'Alice'
#     expected: 'Bob'
ok 3 - object can be destroyed
This makes it pretty easy to see which test has failed, and why it's failed.
So the tests are "huge" yet you want to put them at the end of the module, and force perl to parse them each time the module is loaded?
Many of the modules on CPAN have huge .t files, because that's what the tools their authors use force them into writing. But I don't use tools that require me to write a dozen lines of test code to test one line of code.
And I guarantee that, even with the tests -- which could be Autoloaded if they became a drag on performance -- not one single module of mine takes even 1/1000th of the time to load that your Reprove module takes. Not one.
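One shape that deferral could take (a sketch, not anyone's actual code): keep the test source after __DATA__, which perl never parses at load time, and compile it only on demand:

package My::Module;
use strict;
use warnings;

sub add { return $_[0] + $_[1] }

# Compile and run the embedded tests only when explicitly asked;
# perl does not parse anything after __DATA__ at load time.
sub self_test {
    my $tests = do { local $/; <DATA> };
    eval $tests;
    die $@ if $@;
}

1;

__DATA__
{   my $got = My::Module::add( 2, 2 );
    $got == 4 or die "add(2,2) returned $got, expected 4";
}
print "self-test passed\n";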
... (just search for the name) ...
So now you've got to invent names for all your tests. Just so you can search for that name to find the test?
That is asinine make-work.
If I use die, it automatically "names" the test with the file and line number, in a single-line format that my editor (and just about any other programmer's editor worthy of the name) knows how to parse right out of the box.
And if I use Carp::cluck or Carp::confess, I get full trace-back, each line of which my (and any) editor knows how to parse.
And if I need to add extra trace to either the tests, or the code under test, in order to track down the route to the failure, I can add it temporarily, without needing to re-write half the test suite to accommodate that temporary trace.
Or I can use Devel::Trace to track the route to the failure; or the debugger; or Devel::Peek or ...
And if I need to pause the test at some point -- for example, so that I can attach a (C-level) debugger -- I can just stick a <STDIN> in there.
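Putting those last few points together, a sketch (function under test hypothetical):

use strict;
use warnings;
use Carp qw( confess );

sub double { return 2 * $_[0] }    # hypothetical function under test

my $got = double( 21 );

# die appends "at FILE line N." -- the single-line format editors parse:
$got == 42 or die "double(21) returned $got, expected 42";

# confess reports the same message plus a full stack trace; a <STDIN>
# placed here would pause the run so a C-level debugger could attach:
$got == 42 or confess "double(21) returned $got, expected 42";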
I.e., my test methodology allows me full access to all of the debugging tools and methods available. It doesn't force-fit me into a single, one-size-fits-all methodology (ack/nack), whilst stealing my input and output and denying me all the possibilities that entails.
My way is infinitely better.
Thank you, this feels very interesting!
I kind of wish someone would join in and say why they like or don't like '.t' files and coverage tools, to make it more "spicy" and to help me understand why so many people use them.
It could just be a fashion, as you said, but I guess if people keep using them it's because they like something in them. Or just because they got used to them, and keep hearing so many people say how wonderful it all is and how it's going to get even better.
Like I said before, I am myself only just starting to use testing: four months ago I didn't even know it existed, and since then I have mostly read (very positive) comments about it in several books, without actually using it.
I am going to start now, and am trying to choose how to start. Your thoughts are like honey to me: thanks for this precious food!
Not that I think you are totally right: I will need to form my own opinion by experimenting... and find my own way of developing.
But you challenge many things I have learned, propose alternative paths, and make me think about how I want to relate to testing:
- what will it be for me?
- should testing lead my development, or should my development use testing as one tool among others?
- test coverage or no test coverage? I guess a little pinch of it can maybe help me avoid leaving huge untested areas behind... I will have to try it and feel what it does.
- testing or not testing?
errrrrr..... not that one (^c^)
- spending hours imagining and creating many "edge case" tests to feel safer, or creating them on the go when I feel I need to test and/or debug something?
I'll develop my own sensibility by trying, and by listening to people sharing theirs.
I would like to take my message and yours and share them in "Meditations" or "Seekers of Perl Wisdom" again, to collect more impressions, as the subject has shifted from developing in general to testing alone. And now that this message "has been posted a 'long' time ago", fewer people might see it.
But I am also an inexperienced PerlMonk, so my ideas are naive (they don't take the context into account much, as I don't know it very well).
What do you think?
I kind of wish someone would join in and say why they like or don't like '.t' files and coverage tools, to make it more "spicy" and to help me understand why so many people use them.
I wish that too.
You may read any number of reasons into the lack of such responses here. Here are a couple of possibilities:
- I could be such a kook, that I'm not worth arguing with.
- I could be so good at arguing my case, no one is prepared to subject their prejudices, dogmas and half-baked reasoning to the cold, hard logic of my scrutiny.
You'll have to arrive at your own judgement on that.
What do you think?
I think that you could pose a new question here, something like: "Do you use separate .t files for your tests? If so, why?". If you don't mention this thread or my handle, you might get more responses. I'd stay out of the thread at least until you invited my reaction there.
In the end, you'd have to try it both ways and live with the packages through a few maintenance cycles -- preferably carefully logging your experiences -- to reach some kind of definitive conclusions.
Even then they would be your conclusions, and others would interpret the results differently. I've often seen practices adopted because they are the done thing, or the latest and greatest fad, that then become entrenched habits people will defend without recourse to rationality.
Indeed, I've done it myself in the past. It took a particular project where my way of working was closely monitored and questioned in fine detail by a third party -- it was used to form the basis of a set of working practices and guidelines for a whole huge project -- to make me question some of them in detail.
Re^5: Developing a module, how do you do it ?
by tobyink (Canon) on Mar 02, 2012 at 19:14 UTC
PERL5LIB, "use lib" and "perl -I" all add paths to the front of @INC -- thus a module specified in one of those will "win" over a module that's installed in one of Perl's default library directories.
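A quick way to see the effect (paths hypothetical):

# use lib prepends at compile time, so ./mylib is searched first:
use lib './mylib';
print "$_\n" for @INC;    # ./mylib appears before the default directories

# the same effect from the command line or the environment:
#     perl -I./mylib script.pl
#     PERL5LIB=./mylib perl script.pl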