in reply to Distributing code to 100's of servers

Seems to me that a source control system like Subversion, or its distributed cousin SVK, would be ideal for this.

Store the modules in a repository on your test machine and mirror that repository on your production machines. When changes pass your testing procedures, commit them to the master repository. The production machines then sync and update to pick up the latest code. With SVK, the changes can be pushed out to the mirrors, though I think the update step still has to be run locally on each machine?
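A rough sketch of that setup with SVK, assuming a master repository on a host called testbox and a working copy under /opt/modules (both names are illustrative, not anything from the thread):

```shell
# One-time setup on each production machine: mirror the master repository
# into the local SVK depot and check out a working copy.
svk mirror svn://testbox/repos/modules //mirror/modules
svk checkout //mirror/modules /opt/modules

# Then, whenever changes have been committed to the master:
svk sync //mirror/modules     # pull the new revisions into the local mirror
svk update /opt/modules       # refresh the working copy from the mirror
```

The sync/update pair is what each production machine would run, by cron or on demand, to converge on the latest committed code.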

If anything goes wrong, you can revert to the previous version, either on individual machines, or at the master repository followed by forcing another update on the targets.
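Both rollback styles are one-liners in plain Subversion; the revision numbers and paths below are purely illustrative:

```shell
# Pin a single machine's working copy to a known-good revision:
svn update -r 1234 /opt/modules

# Or back the bad change out of the master repository itself
# (reverse-merge revision 1235), then let every machine update as usual:
svn merge -c -1235 .
svn commit -m "Back out r1235"
```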

If that idea floats your boat, then you have a decision to make about what to store in your master repository.



Re^2: Distributing code to 100's of servers
by roboticus (Chancellor) on Oct 31, 2007 at 13:35 UTC
    I second BrowserUk's suggestion. I've built a system on CVS to manage code distribution across a number of machines, and it works well. I didn't build modules as absolut.todd suggests; we just keep a directory hierarchy that we check out as necessary.

    Basically, it works like this:

    Each morning, the computers compare CUR_PROD_VERSION.sh with their locally stored copy. If it differs, they check out the new CUR_PROD_VERSION.sh, a trivial shell script that checks out the latest production version (using a tag), and run it.
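    The morning check above boils down to "run the bootstrap script only when its content changed". A minimal sketch, with the compare-and-run step as a helper function; the function name, paths, and CVSROOT are all illustrative, not roboticus's actual setup:

    ```shell
    #!/bin/sh
    # update_if_changed CACHED FETCHED
    # Compares the cached copy of CUR_PROD_VERSION.sh with a freshly fetched
    # one; only when they differ, installs the new copy and runs it (the
    # script itself then checks out the tagged production version).
    update_if_changed() {
        if cmp -s "$1" "$2"; then
            return 1              # already current; nothing to do
        fi
        cp "$2" "$1"              # install the new bootstrap script
        sh "$1"                   # run it to pull the tagged release
    }

    # Fetching the fresh copy might look like this (illustrative CVSROOT;
    # "checkout -p" prints the file without touching a working directory):
    #   cvs -d :pserver:reader@cvs.example.com:/cvsroot checkout -p \
    #       prod/CUR_PROD_VERSION.sh > /tmp/CUR_PROD_VERSION.sh.new
    ```

    Keeping the tag name inside the script means a release is promoted just by committing a new CUR_PROD_VERSION.sh to the repository.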

    ...roboticus