Will you be chaining the scripts together in many different ways? Will you be doing anything with the data in between scripts?
You can probably get along (on Unix, at least!) with a driving shell script and a real pipeline for the simpler things -- no need for Perl and system(). If this starts growing and you need more power, you might prefer to refactor the individual scripts into modules (see perlmod and perlmodlib if you aren't familiar with how to do this), and have the main script drive the various modules in the same process.
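If you do go the module route, the driver might look something like this (a minimal sketch; the module and function names are invented for illustration):

#!/usr/bin/perl
# hypothetical driver that runs the whole job in one process,
# passing data between steps in memory instead of via temp files
use strict;
use warnings;
use My::Convert;   # hypothetical: encoding conversion
use My::Strip;     # hypothetical: line stripping

my $text = do { local $/; <> };         # slurp all input
$text = My::Convert::to_utf8($text);    # step 1
$text = My::Strip::blank_lines($text);  # step 2
print $text;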
It's quite possible to let a module have a standalone entry point (sort of like a "main" function) for using it from the command line, while still offering a programmatic interface. This lets you mix and match your usage to whatever's convenient. The trick is to put this somewhere near the beginning of your module:
if (!caller) {
    main();
    exit;
}

sub main { # only used when called from the command line
    # ...
}
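For instance, here's a minimal sketch of a complete filter module built this way (the StripLines name and the strip() function are invented for illustration):

package StripLines;   # hypothetical module, saved as StripLines.pm
use strict;
use warnings;

# Run as a script: there's no caller, so main() fires.
# Loaded with use/require: caller() is true, so we skip it.
if (!caller) {
    main();
    exit;
}

sub main {
    # command-line entry point: behave like an ordinary filter
    while (my $line = <>) {
        print strip($line);
    }
}

sub strip {
    # programmatic interface, callable from other Perl code
    my ($line) = @_;
    $line =~ s/\s+$/\n/;   # trim trailing whitespace
    return $line;
}

1;   # modules must return a true value when loaded

Now perl StripLines.pm < in.txt > out.txt works from the shell, while another program can say use StripLines and call StripLines::strip() directly.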
Thanks for the help, gaal.

In principle, the scripts could be called in various situations, which is why I wrote them individually. They do relatively generic, simple things, like converting encodings, stripping lines, and inserting XML fragments, and so don't really belong together in a single module. So using caller sounds like a good idea.

As to what happens between scripts: nothing. One script reads a file and writes another. This new file is read by the next script, and so on. So in the wrapper script I have to hard-code the names of the intermediate files, which seems somewhat clunky.

loris
In that case, look at it this way: you have your standard Unix toolkit -- grep, find, sed, sort and so on -- and you've added a number of new tools to it. That's fine, and your choice to make them separate is completely reasonable. Now, suppose you were to write a solution for a slightly different task, one that didn't have anything to do with XML or your current problem set. Would you use /bin/sh? Would you use Perl? In many cases either one makes sense... now you just have to make sure your tools aren't clunky, so you can use them with the same ease as you'd use grep and friends in a shell script. Making them read standard input and write standard output helps -- you can certainly pipe Perl scripts into one another.
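As a sketch (the script names below are made up), each tool reduces to a plain filter:

#!/usr/bin/perl
# skeleton of one tool as a filter: reads named files or STDIN
# via the diamond operator and writes the result to STDOUT
use strict;
use warnings;

while (my $line = <>) {
    # ... transform $line here: convert encoding, strip, etc. ...
    print $line;
}

and the wrapper script collapses into an ordinary pipeline, with no intermediate files to name:

convert_encoding input.txt | strip_lines | insert_xml > output.xml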
If you're working on Win32 and using ActiveState Perl, investigate the pl2bat.pl utility that you'll find in your perl\bin directory.
It will let you convert your Perl scripts into .bat files, which then allows you to chain them with normal pipes without requiring you to type
perl script1.pl infile | perl script2.pl >outfile
which is a pain.
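For example (a sketch assuming pl2bat.pl is on your PATH, as in a standard ActivePerl install; the script names are made up):

pl2bat script1.pl script2.pl
script1 infile | script2 > outfile

The first command wraps each .pl file in a thin .bat wrapper next to the original, so the second line runs them like any other console command.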