Re^3: Developing a module, how do you do it ?
by BrowserUk (Patriarch) on Mar 02, 2012 at 04:44 UTC
An alternative idea for having "non-boolean tests" directly in the source. What I don't like about it is that it makes the source less readable: it becomes harder to search for the interesting parts of the code.
I think you got the wrong end of the stick about this.
In the normal way of things, a module, Your::Thing.pm, looks like this:
package Your::Thing;
## The subroutines and stuff that implement your module
1; ## finish with a true value to satisfy require/use
And the tests live in a bunch of .t files.
The "alternative idea" is that Your::Thing.pm looks like this:
package Your::Thing;
## The subroutines and stuff that implement your module
return 1 if caller; ## return a true value and exit when loaded as a module
package main;
### Write tests that use Your::Thing here.
...
You use the module in the normal way via use or require, and it works exactly as with the 'usual way' above.
To run the tests, you type: >perl \Perl\site\lib\Your\Thing.pm
And the code that comes after the return 1 if caller; statement gets run. It really is as simple as that.
And whilst the tests and code exist in the same file, the module code is above that line, and the tests are below it. There is no interleaving. Nothing is "less readable". Nothing is "harder to search" for.
Indeed, maintenance is greatly simplified by having the code and tests within the same file.
Try it. You'll never go back :)
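A minimal, runnable sketch of the pattern described above (the module name and the double() function are invented for illustration; the tests use Test::More, though plain boolean checks work just as well):

```perl
package Your::Thing;
use strict;
use warnings;

sub double { return 2 * $_[0] }

return 1 if caller;    # loaded via use/require: stop here with a true value

# Everything below only runs when the file is executed directly,
# e.g.  perl Your/Thing.pm
package main;
use Test::More tests => 2;

is( Your::Thing::double(3),   6, 'double(3) is 6' );
is( Your::Thing::double(-1), -2, 'double(-1) is -2' );
```

When another script says use Your::Thing, the top-level return fires (caller is true inside a require) and none of the test code runs.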
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
Indeed, maintenance is greatly simplified by having the code and tests within the same file.
It can be, depending on the project.
I'm no longer a fan of the "you must have one .t file for each logical module in your program" rule. I prefer my test files to have their own logical relationships, where each file tests one function of the program in a particular way. For example, in the Pod::PseudoPod::DOM test suite, I have separate subdirectories for each type of formatter and separate test files for each category of transformation (escapes, environments, block-level items, tables, and so on).
This does require a little bit of testing infrastructure, but organizing more than a few tests is like organizing more than a few lines of code. You choose the best option based on what you're doing and you change it to work better when it becomes necessary or obvious.
PseudoPOD?! Really? I mean, Really??
Update: I mean: Really, really!??
You really have nothing better to do with your time? You've never heard of TeX, and PostScript and PDF and Word and HTML?
I'll stop.
Re^3: Developing a module, how do you do it ?
by tobyink (Canon) on Mar 02, 2012 at 09:52 UTC
To me, the three of them seem to do exactly the same thing: give the information of "where are these sources that I need". PERL5LIB does it through an environment variable, lib does it in each source file, and -I does it on the command line. Any other difference?
Ultimately there's only one way of telling Perl where to search for modules: the global @INC array - it's a list of directories that Perl uses to search for libraries. PERL5LIB and "-I" are just ways of informing Perl to pre-populate @INC with certain directories.
And use lib just manipulates @INC while your code is being compiled. (You can look at lib.pm and see what's going on - there's nothing especially mysterious.)
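Roughly speaking, use lib is a compile-time unshift onto @INC. This sketch shows the core of what it does (the path is illustrative; the real lib.pm also adds architecture-specific subdirectories and guards against duplicates):

```perl
use strict;
use warnings;

# A rough equivalent of:   use lib '/opt/myapp/lib';
# BEGIN makes it happen at compile time, before any use statements
# later in the file are resolved.
BEGIN { unshift @INC, '/opt/myapp/lib' }

print "$INC[0]\n";    # new entries go on the front, so they win
```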
They're each useful in different ways.
If you have some paths that you always (or nearly always) want Perl to search in, then use the environment variable. You can set it in your .bashrc/.tcshrc or some other script that is executed as soon as you log in, and forget all about it.
use lib is useful if you want to hard-code library paths that will be searched for specific scripts. It's not especially good for code you hope to publish for other people to install and use though, because they might plan on installing modules somewhere else. Relative paths can also be problematic with use lib because they are treated relative to the directory Perl was executed in, not relative to the file in which the use lib appears. (There are modules like "lib::abs" which improve the situation.)
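One common workaround for the relative-path problem is FindBin from the core distribution, which resolves paths relative to the script itself rather than the current directory (lib::abs, mentioned above, is another option). The ../lib layout here is illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $FindBin::Bin is the directory holding *this script*, so the
# library path below works no matter where the script is run from.
use FindBin;
use lib "$FindBin::Bin/../lib";

print "first search dir: $INC[0]\n";
```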
The -I argument for perl is useful for most other situations. It tends to be what I use when I'm testing Perl modules I've written and intend to publish. I'll use a directory structure like this:
p5-my-module/
    lib/
        My/
            Module.pm
    examples/
        demo.pl
    t/
        00basic.t
        01advanced.t
    xt/
        01pod.t
        02pod_coverage.t
Then I'll have my terminal mostly in p5-my-module, and I can run things like:
perl -Ilib t/00basic.t
perl -Ilib examples/demo.pl
I find that use lib can be handy when used within test suites to help the test cases find additional testing modules (which might be inside "p5-my-module/t/lib/"). This is because although use lib isn't great for stuff that's getting installed on other people's machines, test suites don't get installed, and they are run in a very predictable way.
Beginner question (I searched in vain for this basic information):
What happens if I install a module (make install), and then want to use its uninstalled development version?
Is this when I should use perl -I? Will @INC behave as I expect?
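A quick way to see which copy wins: directories given with -I (or PERL5LIB) are prepended to @INC, so a development copy shadows an installed one, and %INC records which file was actually loaded. A sketch, with a hypothetical My::Module:

```perl
use strict;
use warnings;

# Simulate  perl -Ilib script.pl : -I prepends, so 'lib' is searched first.
unshift @INC, 'lib';

print "first search dir: $INC[0]\n";

# Perl stops at the first @INC entry containing My/Module.pm; after a
# require, %INC shows which file actually won:
#   require My::Module;
#   print $INC{'My/Module.pm'}, "\n";  # lib/My/Module.pm if the dev copy won
```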
Thanks for the tips on tests, BrowserUk, really good!
I will do it your way for now. And it's always possible to switch to .t files if I ever want to.
Here are a few reasons why I think it could be worth the try:
- they are not all in the same file, so I don't risk variable 'collisions' (the same variable in two unrelated tests).
- it is easier for someone who installs the module (or even just myself after 6 months) to understand what happens, to know which file to look in (and each of them is a lot smaller - no need to search around), and to understand what 'ok' means without needing to think about how the testing algorithm works.
- it enables the use of coverage tools. If I understood properly what they do, they make you less likely to forget to test any part of your code, which results in easier maintenance.
Especially when you get some nasty bug in a part of your code that you haven't looked at for 6 months
=> there will be a test there to clearly indicate what didn't work and where. You might have forgotten to put that test there if you hadn't used test coverage.
Thus, testing with .t files makes things easy for others and for yourself (in the future), and it doesn't seem to be so much work to set them up: just paste a piece of test code into a .t file and add some ok() statements... and you will have to run 'prove' quite often, which you could automate with an alias every time you run your code.
Well, it is quite some work to create these .t files, and the more you create, the more you will think of adding ("and what if...? OK, let's add a test"). Which ends up in more testing, and thus more time, less efficiency. I guess this is what testing is about: a bit slower, but surer. And thus you will gain time in the future.
Do these ideas make any sense?
Having said this, I am starting a project for which I don't have the time to learn and do all of this right now, so I will go with your method. Later I might try to make this project "safer". I should.
Thanks for the advice on how to use PERL5LIB, lib and perl -I, too. It's exactly what I needed to understand =o) Now I'll need to experience it. I guess I'll come back to read your message when I'm in doubt about which method is more appropriate.
Do these ideas make any sense?
To me, no.
- variable collision -- use scopes: { ... }; problem solved.
- Easier for someone who installs the module -- How so?
If the module is installed, and the tests are in the module, the tests are installed also. Even if they "installed" it by manually copying the file into the filesystem because -- as a recent post here had it -- it was the only method available to them.
- know in which file to look -- if it is all in one file, you know where to look.
- no need to search around -- even once you've found the right .t file, you'll still need to locate the failing test.
And some of those .t files are huge and have hundreds of tests.
And very often the only clue you have is a test number, and working out which block of code relates to that test number means manually stepping through the file and counting. And that is often far from simple to do.
- and understand what 'ok' means without needing to think about how the testing algorithm works -- Sorry, but that is the biggest fallacy of them all.
When the result is "not ok", then you first have to locate the failing test (from its number), then you have to read the test code and try to understand what boolean test failed; then you have to try and understand what code in the actual module that failure represents; then you have to switch to the actual module and locate the code that failed; then you have to keep switching between the two to understand why; then you have to fix it.
And if that process is "easy", it probably means that it was a pointless test anyway.
- it enables to use Coverage tools -- Another fallacy.
Coverage tools are nothing more than a make-work, arse-covering exercise. I've seen people go to extraordinary lengths to satisfy the coverage tools' need to have every code path exercised, even when many of those code paths are for exceptional conditions that are near impossible to fabricate because they are extremely rare ("exceptional") conditions. Hence you have a raft of Mock-this and Mock-that tools to "simulate" those failures.
But in the rare event that those types of failures -- disk failures, bus failures, power outages, etc. -- actually occur, there is not a damn thing your application can do about it anyway, beyond logging the error and dying. There is no value or purpose in testing that you do actually die in those situations for the sake of satisfying a bean-counting operation.
- there will be a test there to clearly indicate what didn't work and where. -- If only that were true.
It will tell you test number 271 failed. If you are lucky -- and you need to jump through more hoops to make it happen -- it might tell you in what .t file and where the failing test happened.
But it certainly won't tell you why it failed. Or where in the actual module the code that underlies that failure lives, much less why it failed. Was it the test? Was it the test environment? Was it the code being tested?
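The "use scopes" point above can be sketched as follows: wrapping each test in its own bare block keeps identically named variables from colliding, even inside a single file.

```perl
use strict;
use warnings;
use Test::More tests => 2;

{
    my $got = 1 + 1;             # this $got is confined to this block
    is( $got, 2, 'addition' );
}
{
    my $got = 'a' . 'b';         # same name, separate scope: no collision
    is( $got, 'ab', 'concatenation' );
}
```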
But *I* am the square peg in this. It seems that most people have, like you, bought into the Kool-Aid. I'm not here to tell you how you should do it; just to let you know that there are alternatives and let you reach your own conclusions about what fits best with your way of working. Good luck :)