I would like to break up a really long Perl script that I have into multiple files.
perldoc perltoot
perldoc Exporter
What you want to investigate is creating modules that
export functions and groups of functions.
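As a minimal sketch using the classic Exporter idiom (MyUtils and its subs are placeholder names invented for illustration):

    # MyUtils.pm -- all names here are invented for illustration
    package MyUtils;
    use strict;
    use warnings;

    require Exporter;
    our @ISA         = qw(Exporter);
    our @EXPORT_OK   = qw(slurp_file trim);        # exported only on request
    our %EXPORT_TAGS = ( all => \@EXPORT_OK );     # ":all" names a group of functions

    # read a whole file into one string
    sub slurp_file {
        my ($path) = @_;
        open my $fh, '<', $path or die "Can't open $path: $!";
        local $/;                                  # slurp mode
        return <$fh>;
    }

    # strip leading and trailing whitespace
    sub trim {
        my ($s) = @_;
        $s =~ s/^\s+|\s+$//g;
        return $s;
    }

    1;  # a module must end with a true value

Then the main script pulls in only what it asks for:

    use MyUtils qw(:all);        # or just: use MyUtils qw(trim);
    print trim("  hello  "), "\n";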
I really don't think it's necessary to make this script into a module, as it's pretty specific.
There's nothing wrong with writing a very specific module. I do it all the time. It makes for a nice way of pulling things apart into manageable chunks. There is a good tutorial -- Simple Module Tutorial -- here at the Monastery. I suggest you take a look if you're not familiar with creating modules, and don't be afraid. It's a simple and rewarding process!
Please check out the node Re: Perl library or new module?; I believe the information there applies to your question as well.
"No matter where you go, there you are." BB
Modules don't have to be of general (read: generic) use only. The purpose of modular design is to make difficult things easier. That can mean code reuse, or any of a number of other reasons for modularizing your code.
If your difficulty is a program file that's too big (more than a few hundred lines, give or take), it may be time to factor some of the code out into modules. One of the beauties of this approach is that once you have the details of a module hammered out, you can consider it "done" and move on to working on other sections of code. At that point, so long as you stick with the module's interface, you usually don't need to worry that a change somewhere else in your script might ripple a problem into that module.
Modularity isn't just about code reuse. It's also about namespace segregation, divide and conquer coding, and keeping code chunks bite-sized for easy digestion. (and object orientation, and, and, and....)
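For instance, a small application-specific module (names invented) doesn't even have to export anything; its package name alone keeps its subs in their own namespace:

    # ReportGen.pm -- specific to one application, and that's fine
    package ReportGen;
    use strict;
    use warnings;

    sub header { my ($title) = @_; return "== $title ==\n" }
    sub footer { return "-- end of report --\n" }

    1;

    # main script:
    use ReportGen;
    print ReportGen::header('Daily Summary');   # fully qualified, no exports needed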
I would suggest putting all the "service routines" in a module and keeping your "main" script to a bare minimum, calling the various subroutines in the module instead.
Note that global variables should be limited as much as possible (some purists say to avoid them at all costs), and that spreading your global variables across files is not a good idea (unless you really know what you are doing and there is no other elegant way to do it). Certainly do not rely on global variables in your main script being used by the modules' subroutines; instead, pass your data as parameters to your subroutine calls.
There is a very nice new book, "Perl Medic", which deals with exactly this type of thing.
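To illustrate the parameter-passing point, a small sketch (all names invented):

    # Fragile: the sub silently depends on a global in the main script
    sub order_total_global {
        my $sum = 0;
        $sum += $_->{amount} for @main::orders;   # hidden coupling to main::
        return $sum;
    }

    # Robust: everything the sub needs arrives as a parameter
    sub order_total {
        my ($orders_ref) = @_;
        my $sum = 0;
        $sum += $_->{amount} for @$orders_ref;
        return $sum;
    }

    my @orders = ( { amount => 10 }, { amount => 32 } );
    print order_total(\@orders), "\n";            # prints 42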
CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
These are the candidates:
perldoc -f do
perldoc -f require
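Roughly, the difference between the two (the file name is just an example):

    require "helpers.pl";   # loads the file once (recorded in %INC);
                            # dies if it can't be found or compiled,
                            # or if it doesn't end with a true value

    do "helpers.pl"         # re-reads and re-executes the file every time;
        or warn "couldn't run helpers.pl: ", $@ || $!;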
The PerlMonk tr/// Advocate
Where has this come up before ... where ... where ... ahh just a few questions ago. Must be that re-factoring time of year.
Refactor your code into subroutines. Nothing wrong with small subroutines, but there IS something wrong with routines that extend over more than 30 lines; certainly 50 lines is a problem. Well, actually, there is one thing wrong with very small routines: if one is invoked a few million times, the expense of invoking a function far exceeds the time required to execute the actual code. At that point, you have two options:
- Go to Mark Jason Dominus' web site, and read as many articles as you can. When you get tired of him, go to merlyn's web site and continue reading. When you can't take it any more, go back to MJD's articles. You get the idea. Follow the examples, especially MJD's Red Flag articles, and by this time next year the program will no longer be too long. You might also consider whether you're using an inappropriate or too-primitive algorithm for your application.
- You could follow the paradigm of DOS programs from the 640K days: group the routines into categories and put each category into a separate file. The modern, clean, reliable and scalable way to do this is to give each file a name ending in ".pm", say "Update.pm", and have package Update; as one of the first lines in the file. Of course, this is a module. See merlyn's articles or perldoc for more. The old-fashioned, unreliable, un-robust, un-scalable way (you didn't think it got replaced simply because the Perl developers got bored, did you?) is to make the file an ordinary Perl script and invoke it with require "update.pl", assuming that's the file's name, of course. The subroutines will be available to be invoked, and any code outside the routines will run at the time you "require" the script (see the sketch below).
Of course, in the DOS days, the overlay replaced the previous code, gaining maximum use of the 640KB that no one would ever need more than. Today, as you require one script, then another, your memory usage will increase, but who cares? I've got 512MB on my machine, and that's old-fashioned.
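A sketch of that old-fashioned style (file and sub names invented):

    # update.pl -- a plain library file, no package statement
    sub update_records {
        my ($records_ref) = @_;
        $_->{updated} = 1 for @$records_ref;
    }

    1;   # a required file must end with a true value

    # main script:
    require "update.pl";                  # top-level code in the file runs now
    my @records = ( { id => 1 }, { id => 2 } );
    update_records(\@records);            # the subs landed in package main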
--
TTTATCGGTCGTTATATAGATGTTTGCA