I don't know what MySQLDump does, but there are two
basic approaches to backing up your database:
- Back it up as one big opaque chunk of data. This is
the easy way: get a read lock, then copy or tar up the
directories where MySQL stores the database. The read
lock keeps any changes from going into the database
while you're making the backup, so you get a consistent
state; anything that tries to write is simply delayed
until you release the lock. This is the approach I use
for backing up my own databases. I'm not sure what you
mean by "I don't want to involve system files or
directories", but the only files or directories involved
are the ones where MySQL stores your data, which to my
way of thinking are application files and directories
(MySQL ones in particular), not system ones. If by
"system files or directories" you mean that you don't
want to involve the filesystem at all (why?), then this
approach isn't for you. There's a rough sketch of the
lock-and-copy version after this list.
- The other way is to loop through all the records
in all the tables and back them up individually.
This is the more brittle approach in some ways, because
you'll have to change your backup script whenever your
tables change. But it should work. Just get a list of
all your tables, do a foreach loop over them, run
SELECT * FROM tablename for each one, and store
complete copies of all the returned rows. A rough DBI
sketch of this appears after the list as well.
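
Here's a minimal sketch of the lock-and-copy approach using DBI.
The DSN, credentials, data directory (/var/lib/mysql) and backup
path are placeholders for whatever your setup actually uses; the
point is just that the connection holding the read lock has to stay
open while the files are copied.

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  # Placeholder DSN and credentials; adjust for your server.
  my $dbh = DBI->connect('DBI:mysql:database=mydb', 'backup_user', 'secret',
                         { RaiseError => 1 });

  # Take a global read lock; writers block until we release it.
  $dbh->do('FLUSH TABLES WITH READ LOCK');

  # Copy the data directory while the lock is held.
  system('tar', 'czf', '/backup/mysql-data.tar.gz', '/var/lib/mysql') == 0
      or warn "tar exited with status $?";

  # Release the lock so the delayed writers can proceed.
  $dbh->do('UNLOCK TABLES');
  $dbh->disconnect;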
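
And a minimal sketch of the table-by-table approach, again with a
placeholder DSN, credentials and output file name. It uses SHOW TABLES
to build the table list, and the tab-separated output is deliberately
crude; a real script would quote and escape properly.

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect('DBI:mysql:database=mydb', 'backup_user', 'secret',
                         { RaiseError => 1 });

  open my $out, '>', 'backup.txt' or die "Can't write backup.txt: $!";

  # Loop over every table and dump every row.
  for my $table (@{ $dbh->selectcol_arrayref('SHOW TABLES') }) {
      print {$out} "-- $table\n";
      my $sth = $dbh->prepare("SELECT * FROM $table");
      $sth->execute;
      while (my @row = $sth->fetchrow_array) {
          # Crude tab-separated serialization; NULLs become \N.
          print {$out} join("\t", map { defined $_ ? $_ : '\N' } @row), "\n";
      }
  }

  close $out or die "close failed: $!";
  $dbh->disconnect;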
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,".rekcah lreP rehtona tsuJ";$\=$ ;->();print$/
I think you could convert to CSV, Fixed or even an HTML table using DBD::AnyData and print it.
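
For a rough sketch of the CSV half without pinning down the exact
DBD::AnyData calls, plain DBI plus Text::CSV will do it; the DSN,
credentials and table name below are just placeholders.

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;
  use Text::CSV;

  my $dbh = DBI->connect('DBI:mysql:database=mydb', 'backup_user', 'secret',
                         { RaiseError => 1 });
  my $csv = Text::CSV->new({ binary => 1, eol => "\n" });

  my $sth = $dbh->prepare('SELECT * FROM sometable');   # placeholder table
  $sth->execute;

  # Header row from the column names, then one CSV line per row.
  $csv->print(\*STDOUT, $sth->{NAME});
  while (my @row = $sth->fetchrow_array) {
      $csv->print(\*STDOUT, \@row);
  }

  $dbh->disconnect;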
Reclaw
Your mileage may vary.