Re^4: Developing a module, how do you do it ?
by mascip (Pilgrim) on Mar 02, 2012 at 16:08 UTC
Beginner question (I searched vainly for this basic information):
What happens if I install a module (make install) and then want to use its uninstalled development version?
Is this when I should use perl -I? Will @INC behave the way I expect it to?
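My current guess (the paths and module name here are invented, just to show the shape) is that directories added with -I or with the lib pragma go to the front of @INC, so perl would pick up the development copy before the installed one. Is that right?

# run a script against the development copy in ./lib,
# even though My::Module is already installed site-wide
perl -Ilib script.pl

# or, inside the script itself:
use lib 'lib';      # prepends 'lib' to @INC
use My::Module;     # now resolves to lib/My/Module.pm first

# check which copy actually got loaded:
perl -Ilib -MMy::Module -E 'say $INC{"My/Module.pm"}'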
Thanks for the tips on tests, BrowserUk, really good!
I will do it your way for now. And it's always possible to switch to .t files if I ever want to.
Here are a few reasons why I think it could be worth a try:
- The tests are not all in the same file, so I don't risk variable 'collisions' (the same variable used in two unrelated tests).
- It is easier for someone who installs the module (or even just myself after 6 months) to understand what happens, to know which file to look in (and each of them is a lot smaller, so there is no need to search around), and to understand what 'ok' means without needing to think about how the testing algorithm works.
- It makes it possible to use coverage tools. If I understood properly what they do, they make you less likely to forget to test some part of your code, which results in easier maintenance.
Especially when you get a nasty bug in a part of your code that you haven't looked at for 6 months
=> there will be a test there to clearly indicate what didn't work and where. You might have forgotten to put that test there if you hadn't used test coverage.
Thus, testing with .t files makes things easy for others and for yourself (in the future), and it doesn't seem to be so much work to set them up: just paste a piece of test code into a .t file and add some ok() statements... and you will have to run 'prove' quite often, which you could automate with an alias so it runs every time you run your code.
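As an illustration of how small such a file can be (the module and function names are invented, just to show the shape):

# t/basic.t
use strict;
use warnings;
use Test::More;

use My::Module;

ok( defined &My::Module::new,     'constructor exists' );
is( My::Module::add( 2, 3 ), 5,   'add() sums two numbers' );

done_testing();

and then, from the distribution directory, something like prove -l t/ (the -l adds lib/ to @INC, much like perl -I).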
Well, it is quite some work to create these .t files, and the more you create, the more you will think of adding ("and what if... OK, let's add a test"). Which ends up in more testing, and thus more time and less efficiency. I guess this is what testing is about: a bit slower, but surer. And thus you will gain time in the future.
Do these ideas make any sense?
Having said this, I am starting a project for which I don't have the time to learn and do all of this right now, so I will go with your method. Later I might try to make this project "safer". I should.
Thanks for the advice on how to use PERL5LIB, lib and perl -I, too. It's exactly what I needed to understand =o) Now I'll need to try it out. I guess I'll come back and read your message again when I'm unsure which method is more appropriate.
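If I've understood the three mechanisms correctly (paths invented), they boil down to this, with -I taking precedence over PERL5LIB, which takes precedence over the default @INC directories:

# per invocation, on the command line
perl -I/home/me/dev/My-Module/lib script.pl

# per shell session, via the environment
export PERL5LIB=/home/me/dev/My-Module/lib

# per script, written into the code itself
use lib '/home/me/dev/My-Module/lib';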
Do these ideas make any sense?
To me, no.
- variable collision -- use scopes: { ... }; problem solved.
- Easier for someone who installs the module -- How so?
If the module is installed, and the tests are in the module, the tests are installed also. Even if they "installed" it by manually copying the file into the filesystem because -- as a recent post here had it -- it was the only method available to them.
- know in which file to look -- if it is all in one file, you know where to look.
- no need to search around -- even once you've found the right .t file, you'll still need to locate the failing test.
And some of those .t files are huge and have hundreds of tests.
And very often the only clue you have is a test number, and working out which block of code relates to that test number means manually stepping through the file and counting. And that is often far from simple to do.
- and understand what 'ok' means without needing to think about how the testing algorithm works -- Sorry, but that is the biggest fallacy of them all.
When the result is "not ok", then you first have to locate the failing test (from its number), then you have to read the test code and try to understand what boolean test failed; then you have to try and understand what code in the actual module that failure represents; then you have to switch to the actual module and locate the code that failed; then you have to keep switching between the two to understand why; then you have to fix it.
And if that process is "easy", it probably means that it was a pointless test anyway.
- it enables to use Coverage tools -- Another fallacy.
Coverage tools are nothing more than a make-work, arse-covering exercise. I've seen people go to extraordinary lengths to satisfy the coverage tool's need to have every code path exercised, even when many of those code paths are for exceptional conditions that are near impossible to fabricate because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.
But in the rare event that those types of failures -- disk failures, bus failures, power outages, etc. -- actually occur, there is not a damn thing your application can do about it anyway, beyond logging the error and dying. There is no value or purpose in testing that you do actually die in those situations for the sake of satisfying a bean-counting operation.
- there will be a test there to clearly indicate what didn't work and where. -- If only that were true.
It will tell you test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in what .t file and where the failing test happened.
But it certainly won't tell you why it failed. Or where in the actual module the code that underlies that failure lives, much less why it failed. Was it the test? Was it the test environment? Was it the code being tested?
But *I* am the square peg in this. It seems that most people have, like you, bought into the Kool-Aid. I'm not here to tell you how you should do it; just to let you know that there are alternatives and let you reach your own conclusions about what fits best with your way of working. Good luck :)
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
I've seen people go to extraordinary lengths to satisfy the coverage tool's need to have every code path exercised, even when many of those code paths are for exceptional conditions that are near impossible to fabricate because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.
I agree, but don't replace crazy with crazy.
Write tests that provide value. Use coverage to see if you've missed anything you care about. Think about what that coverage means (all it tells you is that your test suite somehow executed an expression, not that you tested it exhaustively).
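For the record, the usual Perl workflow for this (assuming Devel::Cover is installed) is roughly the following; as said above, the report only tells you an expression was executed, not that it was verified:

# instrument the test run, then build the report
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -l t/
cover                # prints a summary and writes an HTML report under cover_db/

# or let the cover front-end drive the tests itself
cover -test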
If you're not getting value out of these activities, you're probably doing something wrong. That could mean fragile tests or tests for the wrong thing. That could mean that you have a problem with your design. That could mean too much coupling between tests and code or too little (and too much mocking).
There's no substitute for understanding what you're doing, so understand what you're doing.
(But don't resolve never to use a lathe or a drill press because you heard someone once used one somewhere for eeeeeevil.)
And some of those .t files are huge and have hundreds of tests.
So the tests are "huge" yet you want to put them at the end of the module, and force perl to parse them each time the module is loaded?
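(For anyone who hasn't seen that style: a common way to embed tests -- not necessarily BrowserUk's exact layout -- is to guard them with caller(), so they only run when the file is executed directly. That stops them running on load, but perl still has to compile them every time the module is used, which is the point here. The module below is invented for illustration.)

package My::Module;
use strict;
use warnings;

sub add { return $_[0] + $_[1] }

# embedded tests: run only when the file is executed directly,
# e.g.  perl lib/My/Module.pm
unless ( caller() ) {
    require Test::More;
    Test::More::is( add( 2, 3 ), 5, 'add() sums two numbers' );
    Test::More::done_testing();
}

1;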
It will tell you test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in what .t file and where the failing test happened.
The Test::More/Test::Simple family of modules report a lot more than that on a test failure. At a minimum they report the test number, line number and filename where the test occurred.
The documentation for these testing modules strongly encourages you (see footnotes in the "readmore" section below) to give each test a name, e.g.:
is($obj->name, "Bob", "object has correct name");
If that test fails, you get a report along the lines of:
ok 1 - object can be instantiated
not ok 2 - object has correct name
# Failed test 'object has correct name'
# at t/mytests.t line 12.
# got: 'Alice'
# expected: 'Bob'
ok 3 - object can be destroyed
This makes it pretty easy to see which test has failed, and why it's failed.
Thank you, this feels very interesting!
I kind of wish someone else would join in and say what they like and don't like about '.t' files and coverage tools, to make it more "spicy" and to help me understand why so many people use them.
It could just be a fashion, as you said, but I guess if people keep on using them it's because they like something in them. Or just because they got used to them, and keep hearing so many people say how wonderful it is, and how it's going to get even better.
Like I said before, I am myself just very much starting to use testing: 4 months ago I didn't even know it existed, and since then I have mostly read (very positive comments) about it in several books, and not used it.
I am going to start now, and am trying to choose how to start. Your thoughts are like honey to me: thanks for this precious food!
Not that I think you are totally right: I will need to form my own opinion by experimenting... and find my own way of developing.
But you challenge many things I have learned, propose alternative paths, and make me think about how I want to relate to testing:
- What will it be for me?
- Should testing lead my development, or should my development use testing as one tool among others?
- Test coverage or no test coverage? I guess a little pinch of it can maybe help me avoid leaving huge untested areas behind... I will have to try it and see what it does.
- Testing or not testing?
errrrrr..... not that one (^c^)
- Spending hours imagining and creating many "edge case" tests to feel safer, or creating them "on the go" when I feel I need to test something and/or debug?
I'll develop my own sensibility by trying, and by listening to people sharing theirs.
I would like to take my message and yours, and share them in "Perl meditations" or "Seekers of wisdom" again, to collect more impressions.
The subject has shifted from developing in general to testing alone, and now that this message "was posted a 'long' time ago", fewer people might see it.
But I am also an inexperienced PerlMonk, so my ideas are naive (they don't take the context into account much, as I don't know the context very well).
What do you think?