powerhouse has asked for the wisdom of the Perl Monks concerning the following question:

We are trying to create a backup system for our MySQL databases. We use Perl in 90% of our programming.

We currently have over 225 tables, and most of them have nearly 100k records. Some of the records contain commas, quotes, periods, and all kinds of special characters.

What I need is a way to back up our tables on a regular basis. My question is this: is MySQL::Backup a safe way to copy the data, without getting the data mixed up because of the content of some of the records (like commas)?

Is this the most efficient way to have my Perl code do it automatically?

I don't want to start using it on our live systems without knowing its capabilities and limitations, so I would appreciate any feedback you have about it. I have read the docs on it, but they don't say much about its capabilities or how it handles the data in inserts or copied records.

Thanks in advance for any assistance,
Richard

Replies are listed 'Best First'.
Re: question about MySQL::Backup
by moritz (Cardinal) on Jul 02, 2008 at 09:11 UTC
    I'd just use mysqldump to export your databases into a fresh set of files.

    You can use a Perl script to launch that, but a simple cron job might even be enough.

    No need to reinvent the wheel in Perl if MySQL already does the same in C ;-)
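
    Something like this crontab entry would do it; the user, password, and paths here are just placeholders:

        # nightly dump of all databases at 02:30, compressed
        30 2 * * * mysqldump --all-databases --single-transaction -u backup -pSECRET | gzip > /var/backups/mysql/all-$(date +\%F).sql.gz

    (Note the escaped \% -- cron treats a bare % as a newline.)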

Re: question about MySQL::Backup
by actualize (Monk) on Jul 02, 2008 at 09:23 UTC

    After looking up the module on CPAN, I think I wouldn't mind using it on my own personal databases. However, if my job were on the line and I were asked to back up a database consisting of 225 tables with nearly 100k records each, I would bite the bullet and use mysqldump.

    If you really want to use Perl for this task, why don't you call mysqldump from a Perl script that passes it variables specific to your database, and run it as a cron job?

    It looks like a good module, but I'm not sure it's "production ready".
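
    A minimal sketch of such a wrapper (the database name, credentials, and paths are assumptions you'd replace with your own):

        #!/usr/bin/perl
        # Sketch: run mysqldump for one database and gzip the output.
        use strict;
        use warnings;
        use POSIX qw(strftime);

        my $db     = 'mydb';                  # placeholder database name
        my $user   = 'backup';                # placeholder credentials
        my $pass   = 'secret';
        my $outdir = '/var/backups/mysql';    # placeholder path

        my $stamp = strftime('%Y-%m-%d_%H%M', localtime);
        my $file  = "$outdir/$db-$stamp.sql.gz";

        # --single-transaction gives a consistent snapshot of InnoDB tables
        my $cmd = "mysqldump --single-transaction -u$user -p$pass $db | gzip > $file";
        system($cmd) == 0 or die "mysqldump failed: $?\n";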

    -Actualize
Re: question about MySQL::Backup
by bradcathey (Prior) on Jul 02, 2008 at 12:36 UTC

    Good question. We faced this issue with two dedicated servers.

    We came up with 2 approaches:
    1) encrypted mysqldump, compressed and copied to a remote server as "snapshots" hourly via a cron script; backups older than 10 days are deleted (see the sketch after this list).
    2) mirror copy of every transaction to a remote server. Surprisingly, this causes no noticeable performance hit.
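
    Roughly, approach 1 looks like this; every path, host name, and key file below is a placeholder:

        #!/usr/bin/perl
        # Hourly snapshot: dump, compress, encrypt, ship off-box, prune old copies.
        use strict;
        use warnings;
        use POSIX qw(strftime);

        my $stamp = strftime('%Y%m%d%H', localtime);
        my $snap  = "/var/backups/mysql/snap-$stamp.sql.gz.enc";

        # dump, compress, and encrypt in one pipeline
        system("mysqldump --all-databases --single-transaction"
             . " | gzip"
             . " | openssl enc -aes-256-cbc -pass file:/etc/backup.key"
             . " > $snap") == 0 or die "backup pipeline failed: $?\n";

        # copy the snapshot to the remote server
        system('scp', $snap, 'backup@remote.example.com:/backups/') == 0
            or die "scp failed: $?\n";

        # prune snapshots older than 10 days
        for my $old (glob '/var/backups/mysql/snap-*.sql.gz.enc') {
            unlink $old if -M $old > 10;
        }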

    —Brad
    "The important work of moving the world forward does not wait to be done by perfect men." George Eliot
Re: question about MySQL::Backup
by dragonchild (Archbishop) on Jul 02, 2008 at 13:46 UTC
    As a MySQL DBA, I would only use tools recommended by MySQL AB themselves. That means mysqldump or, for InnoDB tables, the hot backup solution that Innobase provides on their website. Anything else is just asking for trouble.

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: question about MySQL::Backup
by 13warrior (Acolyte) on Jul 02, 2008 at 14:27 UTC
    Hi. In our company, we do regular backups of our databases. We always prefer a binary backup, i.e. shut down MySQL and tar-gzip the data directory. A Perl process is spawned to do this via a crontab. I guess this is a safe and tested approach if you can afford to have the database down for some time. For MyISAM tables there is also a mysqlhotcopy script that can do the backup.
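
    A cold-backup sketch along those lines (the init script and data directory paths are assumptions for your setup):

        #!/usr/bin/perl
        # Cold backup: stop mysqld, tar the data directory, restart mysqld.
        use strict;
        use warnings;
        use POSIX qw(strftime);

        my $datadir = '/var/lib/mysql';    # placeholder data directory
        my $stamp   = strftime('%Y%m%d', localtime);
        my $archive = "/var/backups/mysql-data-$stamp.tar.gz";

        system('/etc/init.d/mysql', 'stop') == 0
            or die "could not stop mysqld: $?\n";

        # archive the whole data directory while the server is down
        my $rc = system('tar', 'czf', $archive, $datadir);

        # restart the server even if tar failed
        system('/etc/init.d/mysql', 'start') == 0
            or warn "could not restart mysqld: $?\n";

        die "tar failed: $?\n" if $rc != 0;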
Re: question about MySQL::Backup
by igelkott (Priest) on Jul 03, 2008 at 19:25 UTC

    Agree completely that MySQL tools are the way to go. Would just like to add (OT) that you might want to look into the docs on logging (especially binary logs) and replication. Cold backups are good but certainly not the only option.
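
    For instance, enabling the binary log is just a couple of lines in my.cnf (the log path here is a placeholder):

        [mysqld]
        log-bin          = /var/log/mysql/mysql-bin
        expire_logs_days = 10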