in reply to (OT) Fixing OSX's biggest weakness as a dev platform

The way I do it is to create a disk image that contains a case-sensitive, journaled HFS+ filesystem. Then I mount that as /usr/local, and install all my development stuff in /usr/local (which is where it will get deployed when it goes into production).
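
For anyone who wants to try it, here's a rough sketch of the setup as a little Perl wrapper around hdiutil. The image name, volume name, and 10g size are just examples, and you'll likely need root to mount over /usr/local:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Create an image holding a case-sensitive, journaled HFS+ filesystem.
    system(qw(hdiutil create -size 10g -fs),
           'Case-sensitive Journaled HFS+',
           qw(-volname local devlocal.dmg)) == 0
        or die "hdiutil create failed: $?";

    # Attach it at /usr/local instead of the default /Volumes mountpoint.
    system(qw(hdiutil attach devlocal.dmg -mountpoint /usr/local)) == 0
        or die "hdiutil attach failed: $?";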

This also lets me switch between versions of the development platform, or even between projects, just by mounting a different disk image on /usr/local.
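
Swapping environments is just a detach and re-attach (the image name here is hypothetical):

    # Unmount whatever image is currently on /usr/local ...
    system(qw(hdiutil detach /usr/local)) == 0
        or die "hdiutil detach failed: $?";

    # ... and attach the image for the other project in its place.
    system(qw(hdiutil attach project-b.dmg -mountpoint /usr/local)) == 0
        or die "hdiutil attach failed: $?";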


We're not surrounded, we're in a target-rich environment!

Re^2: (OT) Fixing OSX's biggest weakness as a dev platform
by dragonchild (Archbishop) on Apr 11, 2008 at 00:57 UTC
    Huh. I didn't think about disk images. That's a pretty neat solution and probably better than mine.

    This does raise the question: why doesn't OSX put just the bootable requirements on disk and everything else inside a dmg?


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

      Disk images are great, but there are two reasons OSX doesn't "put everything else inside a DMG":

      1. Performance: you're adding a layer of indirection. Since the dmg is itself a file on the "real" filesystem, reading or writing a file inside it requires altering a file on another filesystem. This is not very fast, and while it may be "fast enough" for some things, it won't be for others.
      2. Reliability: if a disk gets a bad sector, the worst case scenario is that the data on that sector is lost. If a DMG file gets corrupted, that whole "disk" is damaged, often beyond repair.

      In short, while using disk images for development tasks is smart (you're keeping stuff in source control, and backups of that anyhow, right? Right?! :D), there are a lot of things it wouldn't be smart for.

      <radiant.matrix>
      Ramblings and references
      The Code that can be seen is not the true Code
      I haven't found a problem yet that can't be solved by a well-placed trebuchet

        1. Performance: you're adding a layer of indirection. Since the dmg is itself a file on the "real" filesystem, reading or writing a file inside it requires altering a file on another filesystem. This is not very fast, and while it may be "fast enough" for some things, it won't be for others.

        You would be hard-pressed to find a scenario where this difference was even measurable. When it comes down to it, they are both exactly the same thing: a series of blocks on a disk. The fact that you might have to do one extra lookup to find out where those blocks reside is fairly meaningless in the real world. There are plenty of other situations where people routinely add one (or more) levels of disk indirection on production systems (LVM and RAID, for example).
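
        If you want to convince yourself, a crude benchmark is easy to write. A sketch in Perl, assuming /tmp sits on the physical disk and /usr/local is the attached image (both paths are just examples):

            #!/usr/bin/perl
            use strict;
            use warnings;
            use Time::HiRes qw(time);

            # Time writing ~256MB to the physical disk and to the image.
            for my $path ('/tmp/bench.dat', '/usr/local/bench.dat') {
                my $start = time;
                open my $fh, '>', $path or die "open $path: $!";
                print {$fh} 'x' x (1 << 20) for 1 .. 256;
                close $fh or die "close $path: $!";
                system('sync') == 0 or die "sync failed";  # flush caches
                printf "%-24s %.2f seconds\n", $path, time - $start;
                unlink $path;
            }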

        2. Reliability: if a disk gets a bad sector, the worst case scenario is that the data on that sector is lost. If a DMG file gets corrupted, that whole "disk" is damaged, often beyond repair.

        From the point of view of both the operating system and the data contained on the disk, there is no difference between a filesystem on a physical disk and a filesystem on a disk image. If you lose a block of data, the effect is identical either way, and the tools you would use to recover from a bad sector on a physical disk will work exactly the same on the disk image.
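
        As a concrete example, you can point the stock HFS+ checker at an image exactly as you would at a physical disk. A sketch (the image name is hypothetical, and the output parsing assumes hdiutil's usual device listing):

            #!/usr/bin/perl
            use strict;
            use warnings;

            # Attach the image without mounting it, find the HFS slice
            # in hdiutil's output, and fsck it like any other volume.
            my $out = `hdiutil attach -nomount devlocal.dmg`;
            my ($dev) = $out =~ m{^(/dev/disk\S+)\s+Apple_HFSX?\b}m
                or die "no HFS slice found in hdiutil output";
            system('fsck_hfs', $dev) == 0
                or warn "fsck_hfs reported problems on $dev\n";
            system('hdiutil', 'detach', $dev) == 0
                or die "hdiutil detach failed: $?";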


        www.jasonkohles.com
        We're not surrounded, we're in a target-rich environment!