in reply to Virtual/distributed database with DBI

What everyone else says about the hosting company.

Why 100MB? What's special about that number? Why should it even be a problem? MySQL databases can grow to gigabytes in size without any trouble. 100MB sounds like a completely arbitrary number.



Nobody says perl looks like line-noise any more
kids today don't know what line-noise IS ...

Re^2: Virtual/distributed database with DBI
by Fletch (Bishop) on Nov 13, 2007 at 00:18 UTC

    And everyone knows no ISP or hosting provider ever sets any sort of arbitrary resource limits on anything . . . (sorry, didn't mean to drip that much sarcasm all over there)

    Hosting companies are contracting to provide a certain service. In order to do so they have to budget X worth of cpu/disk/network bandwidth/sysadmin overhead for each customer. While MySQL in and of itself on a dedicated machine serving a single customer may be perfectly capable of handling much larger databases, caveat emptor if a hosting company's offering that level of service on a shared hosting platform.

    When things slow to a crawl or run out of space because they've oversubscribed their infrastructure, you'll wish you'd gone with a more clued hosting company that either charged you more (because they're spending more on beefier hardware with fewer customers apiece) or offered a dedicated server that you're not sharing with everyone else and their dog's multi-gigabyte databases.

      That is all absolutely true, however... this particular case involves a hosting company that offers 100 x 100M databases, with no option to reallocate them as, say, 10 x 1G databases (or even 1 x 1G database, if you want to make the argument that it's easier to find a box with 100M available than one with 1G), even if you're running them from a managed dedicated server. AFAICT, the only way they'll let you get a database over 100M is if you're on an unmanaged dedicated server, in which case they charge the same rates to provide less service.

      So, yeah, I'd say it's completely arbitrary...

        No argument that it's arbitrary, but the fact remains that's the size they've picked.

        Their infrastructure is probably set up to handle homogeneous databases of no more than 100M in size, and that's going to color things from backups ("when we hit n more databases we need to add a new tape drive to the pool; oh wait, 4% of these are 1G so that means . . . hrmm, carry the 7 . . . .") to recovery and/or migration ("we need to move m databases around to new servers; oh gosh, 6% of them are 1G so wait, we've got to count those as 10-database chunks that can't be broken . . . or do we treat it as a separate class of customer? eww, but now I've got to budget two pools of machines").

        I don't dispute that the choice of 100M was likely plucked out of someone's nether regions, but my point was that one could reasonably justify the limitation separately from the fact that 'ZOMG MYSQL CAN DO 1G DB THATZ CRAZY' (paraphrasing here, of course :). It sounds as if they've chosen to have two classes of service (100M cookie-cutter chunks (but hey, you can have lots of cookies!), or an entire server blob of cookie dough) to simplify things for themselves. If you don't fit one of their classes (needing a 1G cookie that's managed for you . . . mmmmm, managed cookie) you're probably going to be better off finding a provider that offers the right-sized cookie for your needs than you are cobbling together your own ad hoc, duct-taped frankencookie (which they're not going to help you with either, just like they wouldn't support you with their unmanaged dedicated server offering).
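
        For the curious, here's a minimal sketch (in no way a recommendation) of what that duct-taped frankencookie tends to look like with DBI: hash each key to one of the host's small databases, and force every read and write through the same routing function. All the names here (the app_0 .. app_9 databases, the widgets table, shard_for, the credentials) are made up purely for illustration.

          #!/usr/bin/perl
          use strict;
          use warnings;
          use DBI;

          # Hypothetical list of the per-account 100M databases the host hands out.
          my @shards = map { "dbi:mysql:database=app_$_;host=localhost" } 0 .. 9;

          # One handle per shard; connect up front for simplicity.
          my @dbh = map { DBI->connect( $_, 'user', 'password', { RaiseError => 1 } ) } @shards;

          # Pick a shard by hashing the key so a given owner always lands in the same database.
          sub shard_for {
              my ($key) = @_;
              my $sum = 0;
              $sum += ord($_) for split //, $key;
              return $dbh[ $sum % @dbh ];
          }

          # Writes and reads must both go through the same routing function.
          sub save_widget {
              my ( $owner, $widget ) = @_;
              my $dbh = shard_for($owner);
              $dbh->do( 'INSERT INTO widgets (owner, widget) VALUES (?, ?)', undef, $owner, $widget );
          }

          sub widgets_for {
              my ($owner) = @_;
              my $dbh = shard_for($owner);
              return $dbh->selectcol_arrayref( 'SELECT widget FROM widgets WHERE owner = ?', undef, $owner );
          }

        The painful part isn't this routing code; it's everything that spans more than one cookie (joins, totals, rebalancing when one database fills up), all of which turns into application-level glue that the hosting company certainly won't help you debug.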