No argument that it's arbitrary, but the fact remains that's the size they've picked.
Their infrastructure is probably set up to handle homogeneous databases of no more than 100M in size, and that's going to color things from backups ("when we hit n more databases we need to add a new tape drive to the pool; oh wait, 4% of these are 1G so that means . . . hrmm, carry the 7 . . .") to recovery and/or migration ("we need to move m databases around to new servers; oh gosh, 6% of them are 1G so now we've got to count those as 10-database chunks that can't be broken up . . . or do we treat them as a separate class of customer? eww, but now I've got to budget two pools of machines").
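Just to make the back-of-envelope math concrete, here's a minimal Perl sketch of that slot-counting reasoning. The 100M "slot" model and the fleet numbers (94 small databases, 6 at 1G) are made up for illustration; they're not from the provider or the original thread.

    #!/usr/bin/perl
    # Hypothetical capacity-planning sketch: if everything is planned in
    # fixed 100M slots, a 1G database has to be counted as 10 slots that
    # can't be split across servers or tapes.
    use strict;
    use warnings;

    use constant SLOT_MB => 100;

    # Example fleet (made-up numbers): mostly 100M databases, a few 1G ones.
    my @db_sizes_mb = ( (100) x 94, (1000) x 6 );

    my $total_slots  = 0;
    my $unsplittable = 0;    # databases bigger than one slot
    for my $size (@db_sizes_mb) {
        my $slots = int( ( $size + SLOT_MB - 1 ) / SLOT_MB );   # round up
        $total_slots += $slots;
        $unsplittable++ if $slots > 1;
    }

    printf "%d databases consume %d slots; %d of them are multi-slot blobs\n",
        scalar @db_sizes_mb, $total_slots, $unsplittable;
    # -> 100 databases consume 154 slots; 6 of them are multi-slot blobs

A handful of oversized databases inflates the slot count by more than half, which is exactly the kind of wrinkle that makes a provider want everything to be the same size.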
I don't dispute that the choice of 100M was likely plucked out of someone's nether regions, but my point was that one could reasonably justify the limitation apart from the whole 'ZOMG MYSQL CAN DO 1G DB THATZ CRAZY' angle (paraphrasing here, of course :). It sounds as if they've chosen to have two classes of service (100M cookie-cutter chunks (but hey, you can have lots of cookies!), or an entire server's blob of cookie dough) to simplify things for themselves. If you don't fit one of their classes (needing a 1G cookie that's managed for you . . . mmmmm, managed cookie), you're probably going to be better off finding a provider that offers the right-sized cookie for your needs than you are cobbling together your own ad hoc, duct-taped frankencookie (which they're not going to help you with either, just like they wouldn't support you with their unmanaged dedicated server offering).
In reply to Re^4: Virtual/distributed database with DBI by Fletch
in thread Virtual/distributed database with DBI by dsheroh