Don't use a local uncontrolled perl with a shared controlled module library path. You're asking for headaches, especially when combining modules with XS in them and crossing the 5.9 boundary (5.8 is not binary compatible with 5.10; there are probably other examples).
Instead, compile perl for your platform (linux/x86-64, it appears) and tell it that it will be installed on the shared filesystem, for example under /share/perl/5.8.8/{bin,lib} (though you should use something newer than 5.8.8 if at all possible). Make sure, too, that this shared filesystem is mounted at the same path on all machines, or that symlinks make it seem that way.
Then use the shared perl instead of the local perl: instead of #!/usr/bin/perl at the top of your scripts, use #!/share/perl/5.8.8/bin/perl.
Mind you, this only really works if you have a single platform. If you have multiple platforms, e.g., Linux and AIX or Sun or HP, or even Linux on x86-64 and Linux on x86-32, you'll probably want to install to platform-specific directories, have a shared library path, and then rely on PATH being set up properly to find the right perl.
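For the single-platform case, the build amounts to telling Configure the prefix up front. A minimal sketch, assuming the example path and version above (see the INSTALL file in the perl source tree for the details that apply to your platform):
# Tell perl at build time where it will live on the share.
cd perl-5.8.8
sh Configure -des -Dprefix=/share/perl/5.8.8
make
make test
make install
Every machine that mounts /share at the same path then sees the same perl and the same module library.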
Hope that helps.
Thanks for the valuable inputs.
If I wish to run the perl programs on multiple platforms, would that mean I need to set up such a server (a common server with perl and the modules shared) for each of the different platforms?
Is it safe to share the perl? Suppose I wish to run a script xyz.pl from 10 different systems, all accessing the shared perl and modules; would that create any problems?
/share/linux86-64/bin
/share/linux86/bin
/share/linuxia64/bin
/share/linuxppc64/bin
/share/hprisc/bin
/share/hpia64/bin
/share/sunsparc/bin
/share/sun86-64/bin
/share/aix/bin
# etc.
/share/common/perl/lib
Now, if, when you compile perl on each platform, you point it at the appropriate bin directory for its executables and at the common perl lib directory for the libraries, you'll have a single share with everything you need. I assume you don't need all of the directories above, but you may.
The next requirement for this is that anyone on, say, HP/ia64 (Itanium) will have to add the correct bin directory to their $PATH, e.g., PATH=/share/hpia64/bin:$PATH. This is so that the new perl is found first.
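To make those two steps concrete, here is a rough sketch for the linux/x86-64 build host. I believe -Dbin and -Dprivlib are the Configure symbols that control where the executables and the library end up, but verify the symbol names against the INSTALL notes that ship with your perl before relying on them:
# On the linux/x86-64 host: per-platform executables, shared library.
sh Configure -des \
    -Dbin=/share/linux86-64/bin \
    -Dprivlib=/share/common/perl/lib
make && make test && make install

# Each user (or login script) on that platform then prepends the right bin dir:
PATH=/share/linux86-64/bin:$PATH
export PATH
Repeat the Configure/make steps once per platform, changing only the bin directory.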
Finally, your xyz.pl script would have to start like this:
#!/bin/sh
eval 'exec perl -S "$0" "$@"'
if 0; # this line keeps it from being seen by perl.
This will load the shell, which will evaluate the string, which will cause it to exec (replace itself with) perl with the name of the script and any parameters passed along. Perl will load the script, ignore the first line (the #! line doesn't have "perl" in it), start executing with the second, see the eval over two lines, see the if 0, and the compiler should optimise it away. And then it will merrily go on to the rest of your code.
Now, all this said, when you compile perl, you should have the opportunity to hard-code some extra paths into @INC. My recommendation? Do so. Hard-code in a location for your OWN modules (not the ones you're installing from CPAN). For example, /share/common/perl/locallib. And then you can put your modules here. I say this largely because I'm of the opinion that every(*) perl script that is intended to last more than the day it's coded on should look like this:
#!/bin/sh
eval 'exec perl -S "$0" "$@"'
if 0; # this line keeps it from being seen by perl.
use lib ...; # if necessary.
use My::App; # or whatever it's called.
my $app = My::App->new();
$app->parse_args(@ARGV); # passing in @ARGV is recommended but not required
exit $app->run();
Basically, load the module that has the real code, create the app object, tell the object to parse the arguments, and then tell the object to run, exiting with whatever return code it returns. The reason for this is simple: it makes it easier to write unit tests for your code. It also makes it easier to embed your app within another one, but that's the same as writing unit tests, since unit tests will generally embed the app within the .t file (by doing the same as the above).
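For illustration, here is one way the module behind such a script might be laid out. This is only a sketch: My::App, parse_args and run come from the skeleton above, while the options (--verbose, --output) and everything else are made up.
package My::App;
use strict;
use warnings;
use Getopt::Long;

our $VERSION = '1.02';    # so callers can check what they loaded

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

sub parse_args {
    my ($self, @argv) = @_;
    local @ARGV = @argv;                 # Getopt::Long works on @ARGV
    GetOptions($self, 'verbose!', 'output=s')
        or die "usage: $0 [--verbose] [--output FILE] [file ...]\n";
    $self->{files} = [@ARGV];            # whatever is left after the options
    return $self;
}

sub run {
    my ($self) = @_;
    # ... the real work goes here ...
    return 0;                            # becomes the script's exit status
}

1;
And a .t file can embed the app exactly the same way the script does:
# t/basic.t
use strict;
use warnings;
use Test::More tests => 1;
use My::App;

my $app = My::App->new;
$app->parse_args('--output', '/tmp/out.txt', 'somefile');
is($app->run, 0, 'app runs cleanly');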
(*) Ok, there are other exceptions, too, but it's a general rule for me.
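As for hard-coding an extra @INC path at build time, as recommended above: I believe -Dotherlibdirs is the Configure symbol that does this, but treat that as an assumption and check it against your perl's INSTALL notes:
# Added to whichever Configure invocation you use (see above):
sh Configure -des ... -Dotherlibdirs=/share/common/perl/locallib
With that in place, modules dropped into /share/common/perl/locallib are found without any use lib line in the scripts.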
BTW, this is an NFS share with hundreds(!!) of developers using it. Our build environment is written in perl (using make underneath), so even our C/C++ and Java devs use perl without knowing it. The share also holds the Windows perl and is exported via samba as well. While I'm sure the machine has Gigabit networking, caching hasn't been an issue; by default NFS (and, I assume, samba) already does some caching anyway.
Is it safe to share the perl? Suppose I wish to run a script xyz.pl from 10 different systems, all accessing the shared perl and modules; would that create any problems?
No. It might create lots of traffic on your share, so look into caching :)
If the perl version on the other system is different than that of the perl version of the system where the modules are installed,
This is why it's better to mount perl and instruct the users (or their programs' shebangs) to use the mounted perl.
I think that it is a good idea, specifically in the case of XS-based code such as DBD drivers, to install and maintain those libraries “per machine,” precisely to avoid creating unwanted future dependencies when ... inevitably ... one sexy new machine arrives but the budget does not yet allow all of the others to be replaced. So you build an identical set of packages on each machine, but you build them on each machine, so that CPAN and the various configure scripts correctly capture that machine’s individual quirks.
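One low-tech way to keep that per-machine set identical is a bundle module that simply lists every distribution you depend on, installed with the stock CPAN shell on each box. This is only a sketch, and Bundle::OurStuff is a hypothetical name:
# Run once per machine; Bundle::OurStuff is a hypothetical bundle module
# (a plain package whose POD lists the required distributions).
perl -MCPAN -e 'install("Bundle::OurStuff")'
The CPAN shell's autobundle command can generate such a bundle file from a machine that already has everything installed.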
From a software perspective, I like to use a sort of “stub loader” module, which basically has a fixed use lib statement in it (so it can find what it needs to run ...) and whose sole purpose in life is to configure the @INC path needed by the “real” application object, to instantiate that object, and to tell it to run(). In this way, the inevitable environmental dependencies are placed in the stub, and in the stub alone.
One final tip, which I learned the hard way, is that you should always encode some kind of version identifier into that application object, and the stub should check it. In fact, the stub is an excellent place to do all kinds of “do this just once at initialization time” activities. This would be not so useful in the case of “ordinary CGI,” but I doubt that folks do too much of that anymore.
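A minimal sketch of such a stub, reusing the My::App naming from earlier in the thread; the shebang, library path, and version number are all illustrative:
#!/usr/bin/perl
# Stub loader: the only place that knows about this machine's environment.
use strict;
use warnings;

use lib '/share/common/perl/locallib';    # fixed, per-site library path

use My::App;

# Refuse to run against an application object that is too old.
my $required = 1.02;                      # illustrative version number
die "My::App $required or newer required\n"
    unless defined $My::App::VERSION && $My::App::VERSION >= $required;

# Any other "do this just once at initialization time" work goes here.

my $app = My::App->new;
$app->parse_args(@ARGV);
exit $app->run;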