in reply to Application deployment

What is the best (and ideologically correct) way to deploy application dependencies?
First question: deploy where? A set of machines you have total control over? Some machines internal to the company? All identical machines? Random machines worldwide running random OSes?

Second question: deploy when? Automatically, when a new version is available? Pushed on demand? Pulled on demand? Does it get rolled out one machine at a time? All machines? Some machines?

Third question: have you exhausted the ready-made tools and solutions already available? Why aren't they working for you? For me, the "best" way usually means using what's already there.

Re^2: Application deployment
by Anonymous Monk on Feb 07, 2012 at 13:46 UTC

    Servers. Different OSes: CentOS/Gentoo/Debian. Later I plan to use the same OS everywhere, if possible. Yep, total control, internal to the company.

    Deployment when needed. When a new feature is ready, it is pulled to the server from the git repo; rarely, as a tgz. Different branches can be rolled out separately. Also test deployments.
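
    A pull-on-demand rollout like the one described above can be sketched as a small shell helper. This is only a sketch: the function name, arguments, and the commented-out restart step are assumptions, not the poster's actual setup.

```shell
# Minimal sketch of a pull-on-demand deploy, assuming the app is a git
# checkout on each server. Helper name and restart step are hypothetical.
deploy() {
    # deploy <app_dir> <branch>: make <app_dir> match origin/<branch>
    app_dir=$1
    branch=$2
    git -C "$app_dir" fetch origin &&           # grab new commits
    git -C "$app_dir" checkout -q "$branch" &&  # branches roll out separately
    git -C "$app_dir" reset --hard -q "origin/$branch"
    # A real deploy would restart the service here, e.g.:
    #   sudo systemctl restart myapp    # hypothetical service name
}
```

    Rolling out one machine at a time is then just a loop over hostnames, running the same commands over ssh.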

    I have encountered two problems. First: a RAID failure on one of the prod servers, where we had no backup. It was pretty slow and painful to install all the deps and get the box running again ASAP.

    Second: during a fresh install, one of the deps had a memory leak in the version that was automatically fetched from CPAN. It took some time to get it working.

      ...RAID failure on one of the prod servers, where we had no backup...

      Ok, really? That isn't a problem with CPAN or modules. That is just being cheap and getting burned by it.

      ...during fresh install one of deps had memory leak in version...

      At first glance I thought this was the more legitimate reason to avoid using CPAN. But then again, how do you know that any shared library you upgrade on a production system isn't going to bring about a similar problem?

      Celebrate Intellectual Diversity

        Sure, it is not a CPAN problem. But I might head off future failures (if any) by keeping a local::lib tree in the git repo.

        While testing/developing, I can see that libX ver. 123 has a memory leak/bug/etc. So I push ver. 111, which has no problems, into local::lib. This way there is no need to install ver. 111 manually on multiple servers: just pull everything from the git repo, already tested and confirmed to work.
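
        That workflow can be sketched as follows; the module name, version numbers, and paths are made up for illustration, not the actual dependencies involved. The known-good release is installed into a local::lib-style tree inside the repo once, committed, and each server just pulls and points PERL5LIB at it.

```shell
# Pin the version that tested clean into a repo-local tree (run once, on
# the dev box; "LibX" and the versions are hypothetical):
#
#   cpanm --local-lib=./local LibX@1.11   # not 1.23, which leaks memory
#   git add local
#   git commit -m "pin LibX 1.11; 1.23 leaks memory"
#
# At runtime, each server points perl at the committed tree:
use_local_lib() {
    # use_local_lib <repo_dir>: prepend the checked-in local::lib tree to
    # PERL5LIB so the pinned versions win over whatever the system has.
    PERL5LIB="$1/local/lib/perl5${PERL5LIB:+:$PERL5LIB}"
    export PERL5LIB
}
```

        A deploy on each server is then just a git pull plus a service restart; no per-box CPAN runs, and no chance of fetching a newer, untested release.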

      CentOS, Gentoo, and Debian are all flavours of the same OS in my book.

      Anyway, I'd use Puppet or CFEngine or something similar.

      I have encountered two problems: a RAID failure on one of the prod servers, where we had no backup. It was pretty slow and painful to install all the deps and get the box running again ASAP.

      What can I say? Incompetence? First of all, not having backups should, IMO, be a firing offense. No second chances. Second, you ought to be able to clone a new production box in a matter of minutes, automatically. You should have an install server on your maintenance network that just dumps a complete OS, tailored to your needs, onto a freshly booted box.
        First of all, not having backups should, IMO, be a firing offense. No second chances.

        That depends on the circumstances. If the box that failed was one of a cluster of many identical ones behind a load balancer, and the supplicant thought he could rebuild a replacement fairly quickly, then it is reasonable not to back up individual boxes, just the configuration necessary to rebuild one.

        I am also responsible for a number of production boxes that are not backed up, but they are part of a redundant cluster, and the configuration to build another is on a Puppet server, so if one fails I can recreate it in an hour or so.