in reply to Distributing code to 100's of servers
It seems to me that a source control system like Subversion, or its distributed counterpart SVK, would be ideal for this.
Store the modules in a repository on your test machine and mirror that repository on your production machines. When changes pass your testing procedures they get committed to the master repository; the production machines then sync and update to pick up the latest code. With SVK, changes can be pushed out to the mirrors, though I believe the update itself still has to be run locally on each machine.
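A rough sketch of how that cycle might look on one production machine, assuming a hypothetical master repository at svn://master.example.com/modules and a working copy under /opt/perl-modules (syntax from memory; see 'svk help mirror' for the details):

    # One-time setup: mirror the master repository into the local depot
    # and check a working copy out of the mirror.
    svk mirror svn://master.example.com/modules //mirror/modules
    svk checkout //mirror/modules /opt/perl-modules

    # For each release: pull the new revisions into the local mirror,
    # then update the working copy from it.
    svk sync //mirror/modules
    svk update /opt/perl-modules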
If anything goes wrong, you can revert to the previous version, either on individual machines or at the master repository, forcing another update at the targets.
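With plain Subversion, for instance, a rollback on a single machine is one command, and a rollback at the master is a reverse merge (revision 1234 here is just a placeholder for the last known-good revision):

    # Pin one machine's working copy back to the known-good revision.
    svn update -r 1234 /opt/perl-modules

    # Or undo the bad change in a working copy of the master repository,
    # so every machine picks the rollback up on its next update.
    svn merge -r HEAD:1234 .
    svn commit -m "Roll back to r1234"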
If that idea floats your boat, then you have a decision to make about what to store in your master repository: the source distributions, or the ready-built installed images.
If you store the source distributions, you only need one version of each new distribution at the master repository, but you end up with four copies on every production machine: the one in the mirror repository, the checked-out copy (a la a CPAN packages directory), the 'blib' copy, and the final installed copy.
You also need an ancillary process to run the make/test/install cycle on each machine.
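That ancillary process is just the usual CPAN-style build cycle, run in each checked-out distribution directory; something along these lines, assuming one subdirectory per distribution under the working copy:

    # Build, test, and install every distribution in the working copy.
    for dist in /opt/perl-modules/*/ ; do
        ( cd "$dist" && perl Makefile.PL && make && make test && make install )
    done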
If you store the installed images, you have to keep one platform-dependent installed image per platform at the master repository, but only two copies on each target machine: the mirror copy and the checked-out copy, which is also the installed image.
There is no need for the make/test/install cycle, and the update goes directly into the installed image (lib) directories.
Whether that second option is viable will depend on how consistent your production machine installations are.
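For what it's worth, the installed-image option could look something like this on a target machine, assuming the master repository keeps one subtree per platform (the paths and the PERL5LIB approach are just one way to wire it up):

    # Check the platform's installed image out straight into a lib directory.
    svk checkout //mirror/modules/linux-x86_64 /opt/perl-lib

    # Point your applications at it, via the environment or 'use lib'.
    export PERL5LIB=/opt/perl-lib

    # Each release is then just a sync and an update; no build step.
    svk sync //mirror/modules
    svk update /opt/perl-lib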
Re^2: Distributing code to 100's of servers
by roboticus (Chancellor) on Oct 31, 2007 at 13:35 UTC