I work with a system that generates a steady flow of outbound files. These files need post-processing depending on who they're being sent to, and that post-processing is done in Perl. Some files only need to be moved to a particular location, others need to be renamed based on their contents, and some need to be verified against an Oracle database and have their contents modified. Some require all of these steps, with more to come.
Point is: originally I wanted the script to load the necessary post-processing steps (extensions) on the fly. (After all, why should it load DBI and DBD::Oracle if all it's doing is moving a file?) I achieved this by writing a separate package for each routine and loading it with a require statement. The packages all have a subroutine called "process" that takes the same argument: a hash ref containing the list of files to modify (and some other parameters).
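A stripped-down sketch of what one of these extension packages looks like (the package name, the hash key, and the body are just placeholders for this post):

package Rename;
use strict;
use warnings;

sub process {
    my ( $class, $args_ref ) = @_;    # called as Rename->process($args_ref)
    foreach my $file ( @{ $args_ref->{files} } ) {    # 'files' key is illustrative
        # ... rename $file based on its contents ...
    }
    return $args_ref;    # hand the (possibly updated) hash ref to the next extension
}

1;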
Example:
If the outbound files were for PartnerA, the program would read a little config file and see that PartnerA needed the following extensions: modify, rename, dbread, and move--in that order. Those extensions would be pushed into an array @ext:
foreach my $ext (@ext) {
    eval "require $ext";
    die "Could not load $ext: $@" if $@;
    $args_ref = $ext->process($args_ref);
}
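The config lookup that populates @ext is nothing fancy; roughly something like this ($config_file, $partner, and the file format shown are stand-ins for what I actually use):

# Example config line:  PartnerA: modify, rename, dbread, move
open my $cfg, '<', $config_file or die "Can't open $config_file: $!";
my @ext;
while ( my $line = <$cfg> ) {
    chomp $line;
    next unless $line =~ /^\Q$partner\E:\s*(.+)/;
    @ext = split /\s*,\s*/, $1;    # ordered list of extension package names
    last;
}
close $cfg;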
The above is written and it works, and yet it's difficult to maintain. I wonder whether my initial aversion to loading modules I'll never use was worth the current price of maintenance.
And so I pose a question to the monks: is the above a reasonable idea, or is there a better way, like perhaps a simple chain of separate Perl programs rather than dynamically loaded modules? The only requirement is that there be one and only one central "controller" script that launches the other processes. I can't build a bunch of different scanners for different partners--we have too many.
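Just to illustrate the alternative I'm picturing, the controller would shell out to one small script per step, something like this (the script path and @files are made up for the example):

foreach my $step (@ext) {    # e.g. modify, rename, dbread, move
    my $script = "/opt/outbound/steps/$step.pl";
    system( $script, @files ) == 0
        or die "$script failed: exit status " . ( $? >> 8 );
}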
Many thanks,
kurt_kober