Your description is too vague to do more than guess at the problem, but one possible source of confusion might be that import doesn't export the functions you're expecting it to because the initialization code for @EXPORT et al has not yet run.
Try surrounding the code with BEGIN { } to make sure any necessary initialization happens at compile time. This is done implicitly when you use a module — if you include code directly in your script, you have to provide for this yourself.
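Here's a minimal sketch of what that looks like. The package and sub names are invented for illustration; the point is that the `BEGIN` blocks make the export setup and the `import` call happen at compile time, which is the work `use` would normally do for you:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical inlined module -- names are illustrative only.
package My::Inlined;

BEGIN {
    require Exporter;
    our @ISA    = ('Exporter');   # set up at compile time...
    our @EXPORT = ('hello');      # ...so import() below can see them
}

sub hello { return 'hello from My::Inlined' }

package main;

BEGIN { My::Inlined->import }     # runs at compile time, like `use` would

print hello(), "\n";              # prints "hello from My::Inlined"
```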
I am not sure I understand which variables you need to declare. Are they in package main, but referenced from other packages?
Makeshifts last the longest.
Yes, the packages did have use Exporter, @ISA, and @EXPORT properly defined.
Aha... I think I found the problem. The poorly written code seems to have a mismatched brace, causing the package main; statement to take effect only inside a code block.
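For anyone following along, here's a sketch of that failure mode (names are hypothetical, matching the example below): a `package` statement inside braces only lasts until the closing brace, so a stray brace can silently shove the "main" code back into the module's package.

```perl
# Hypothetical reconstruction of the mismatched-brace bug.
package MYMODULE::THING;
sub Func1 { }

{   # a stray/mismatched brace opens a block here...
    package main;
    our $somevar = 1;    # this does set $main::somevar...
}
# ...but when the block closes, compilation resumes in
# MYMODULE::THING, so later "main" code lands in the wrong package:
sub Func2 { }            # this becomes MYMODULE::THING::Func2
```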
I tried to duplicate my situation with some code, but apparently it doesn't complain about the variable declarations. For example:
package MYMODULE::THING;
use Exporter;
@ISA = qw(Exporter);
@EXPORT = qw(Func1 Func2);
sub Func1 {}
sub Func2 {}
package main;
import MYMODULE::THING;
$somevar = 1;
@somearray = (1, 2, 3);
...
...works fine.
It appears I need to dig further into this junk. Thanks.
I wish to append all module code into one file to eliminate dependency on the module files.
This is a very bad plan. You will almost always want to factor out most of the code to modules. Bringing everything in to be a monolithic script usually fails the sniff test.
As for your issues ... that's because "use" does a bunch of work behind the scenes for you. The important item that's missing is calling the import() function. So, you could hack it up as follows:
use Foo qw( foo bar );
---- becomes ----
package Foo;
# stuff here ...
package main;
Foo->import( qw( foo bar ) );
As I said before, I strongly recommend against this course of action, but, as usual, it's possible to do stupid things.
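For completeness, here is that hack as a single runnable file. Foo, foo() and bar() are the placeholder names from the example above, not a real module; `use Exporter 'import'` borrows Exporter's import() without inheritance:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder module, inlined into the script.
package Foo;
use Exporter 'import';            # borrow Exporter's import() directly
our @EXPORT_OK = qw( foo bar );

sub foo { return 'foo called' }
sub bar { return 'bar called' }

package main;
Foo->import(qw( foo bar ));       # stands in for: use Foo qw( foo bar );

print foo(), "\n";                # prints "foo called"
```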
------
We are the carpenters and bricklayers of the Information Age.
Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose
I shouldn't have to say this, but any code, unless otherwise stated, is untested
It's a lousy perl script to begin with, and I'd be humiliated if I had to admit that I wrote it. Fortunately I did not. But, you have me thinking here about the pros and cons of single file w/ modules or separate module files.
The reason I'd want one file is to make it easier to admin when updates are made available, which happens often. Testing can be done easier by simply running a separately named script. The script and supporting files are hardcoded to run out of a specific directory, so I cannot create a new one with updated module files.
But anyway, it appears the module files aren't the true problem. It's the bad code design and unnecessary dependencies that are at issue.
Hmm.... Thanks for making me think harder about this.
The reason I'd want one file is to make it easier to admin when updates are made available, which happens often. Testing can be done easier by simply running a separately named script. The script and supporting files are hardcoded to run out of a specific directory, so I cannot create a new one with updated module files.
I'm going to address each of these in a separate comment.
- The reason I'd want one file is to make it easier to admin when updates are made available, which happens often.
This is a fallacy. It's actually easier to manage changes made to very small items that are independent from one another. That way, you know that the change's impact is limited to a very small area. This is called decoupling, and it is an extremely important concept in change management.
- Testing can be done easier by simply running a separately named script.
It is much easier to test small, independent items than it is to test a big complex thing. For one thing, you can more easily determine what the failure cases are. Plus, if you have a monolithic thing, you have to take into account every interaction every time you make a change. If you have a bunch of small, decoupled bits of code, you only have to worry about how the one piece you're looking at behaves. It's easier to work with 100 lines at a time than 10,000 lines.
- The script and supporting files are hardcoded to run out of a specific directory, ...
Just because it's only ever going to run with one set of parameters doesn't mean you should limit yourself to that. Hard-coding is almost never good, outside of a configuration file. For one thing, you can't test the code outside of a production environment. If you have a configuration file, you can test in a temporary directory and feel confident that it will work in the production directory.
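As a sketch of that idea -- the file name, key names, and "key = value" format here are all invented for illustration, not anything from the original script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse a simple "key = value" config file into a hashref.
sub read_config {
    my ($file) = @_;
    my %config;
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        next if $line =~ /^\s*(?:#|$)/;          # skip comments, blank lines
        my ($key, $value) = split /\s*=\s*/, $line, 2;
        $config{$key} = $value;
    }
    close $fh;
    return \%config;
}

# The script then goes wherever the config says, so a test copy can
# point at a scratch directory while production keeps its own setting:
#   my $config = read_config('app.conf');
#   chdir $config->{workdir} or die "Can't chdir to $config->{workdir}: $!";
```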
I once spent two days adding configuration to a database connection class so that I could test it. I've yet to actually work with more than one database, but the ability to add 300 tests was absolutely critical. For one thing, I fixed a bug that had been annoying me (and my users) for three months. Impossible to do if I had to work directly with the database. (As in, impossible for me to make the effort to do it, not impossible from a technical sense.)
I'd be humiliated if I had to admit that I wrote it.
Undoubtedly we've all written code that, a few years later, we look back on with regret at the decisions we made. It's important to remember that it's the code that is being evaluated, not the person.
It sounds like you have two issues to deal with (if I can guess at what's behind your questions). The first is how to improve the code. The second is how to implement a test suite or test process. Those are both broad topics and Super Search on 'Refactoring' and 'Test Suite' should give you good reading.
Testing can be done easier by simply running a separately named script. The script and supporting files are hardcoded to run out of a specific directory, so I cannot create a new one with updated module files.
perl's -I switch exists to get around this problem. Use it to specify which directory to use modules from for testing.
--Solo
--
You said you wanted to be around when I made a mistake; well, this could be it, sweetheart.