http://qs1969.pair.com?node_id=703390
Category: Database Programming
Author/Contact Info Roman Chemisov gangabass@gmail.com
Description: Sometimes you need to create several databases and load the same SQL dump into each of them. This little script helps you complete that task. You can run it like so: batch_databases.pl --host="localhost" --user="root" --password="secretpass" --create_db="test_db" --create_user="test_db_user" --create_password="test_user_password" --dumpfile="test_dump2.sql"
#!/usr/bin/perl

use strict;
use warnings;
use DBI;
use Getopt::Long;

my ( $DB_HOST, $DB_USER, $DB_PASS, $create_db_name, $create_user_name, $create_password );
my $batch_file;


my $result = GetOptions (
            "host=s" => \$DB_HOST,
            "user=s"   => \$DB_USER,
            "password=s"  => \$DB_PASS,
            "create_db=s"  => \$create_db_name,
            "create_user=s"  => \$create_user_name,
            "create_password=s"  => \$create_password,
            "dumpfile=s" => \$batch_file,
            );

my $drh = DBI->install_driver("mysql");

my $rc = $drh->func('createdb', $create_db_name, $DB_HOST, $DB_USER, $DB_PASS, 'admin') or die $drh->errstr;

my $dbh = connect_to_db( $create_db_name, $DB_USER, $DB_PASS, $DB_HOST );

my $sth = $dbh->prepare(qq{
    
    GRANT ALL on $create_db_name.* TO ?\@'localhost' IDENTIFIED BY ?
});
    
$sth->execute( $create_user_name, $create_password ) or die $dbh->errstr;

my @statements = read_batch_file($batch_file);

foreach my $statement (@statements) {
    
    $statement =~ s{[^[:ascii:]]}{ }g;
    
    $dbh->do($statement) or die $dbh->errstr;
}

sub read_batch_file {
    
    my ($filename) = @_;
    my @statements;
    
    my $sql_dump;
    
    unless ( -e $filename ) {
        
die "Couldn't find batch file '$filename'. Something wrong with the filename?\n";
    }
    else {
        
        open my $source, "<", $filename or die $!;
        
        {
            
            local $/;
            $sql_dump = <$source>;
        }
        close $source;
        
        remove_sqlyog_comments(\$sql_dump);
        remove_phpmyadmin_comments(\$sql_dump);
    }
    
    @statements = split ";\n", $sql_dump;
    
    return @statements;
}

sub remove_sqlyog_comments {
    
    my ($sql_dump_ref) = @_;
    
    ${$sql_dump_ref} =~ s{/\*.+?\*/;?\n}{}gs;
}

sub remove_phpmyadmin_comments {
    
    my ($sql_dump_ref) = @_;
    
    ${$sql_dump_ref} =~ s{^--.*?\n}{}gm;
}


sub connect_to_db {
    
    my ( $database, $user, $password, $hostname ) = @_;
    
    my $port = 3306;
my $dsn = "DBI:mysql:database=$database;host=$hostname;port=$port";
    my $dbh = DBI->connect( $dsn, $user, $password ) or die $DBI::errstr;
    
    $dbh->do(qq{SET NAMES 'utf8'}) or die $dbh->errstr;
    
    return $dbh;
}
Re: Batch database creation
by graff (Chancellor) on Aug 10, 2008 at 22:10 UTC
    Just a few points I'm wondering about:
    • Since the script only installs data on one server per run, and all the host/user/db-name params are needed on the command line for each run, is it really any more convenient than just doing the appropriate mysql command the same number of times?

    • Why do you modify the content of the dump file as it goes through your script? I'm referring to the part that replaces every non-ascii byte with a space in the main foreach loop -- this seems especially strange, considering that your connection process includes "SET NAMES 'utf8'". What if some of the database tables are supposed to contain non-ascii data?

    • (The other parts where you modify the dump file contents, removing lines that start with "--" and "/*...*/", strike me as good reasons to stick with the native "mysql" command-line tool, which already knows what to do with these things.)

    • Have you benchmarked this against using the comparable mysql command line? I suspect that for doing lots of inserts to tables that are being created, the DBI::do() method will end up being many times slower than the native utility provided by mysql. (You could add stuff to your script to try optimizing, but you would just be re-inventing the stuff that the mysql command-line tool can already do, and the result would still be slower.)

    • Why load the entire "batch file" (which I assume is a normal dump created by "mysqldump") into a scalar variable in memory -- and then also make a complete copy of it as an in-memory array? What if it's a really huge database?

    • Do you really need to specify the port number for the connection?
    If the goal is to load the same data to several hosts/servers, I think you would want a script that works with a mysqldump file and a list of targeted servers (giving hostname, username, password for each one), so that one command line run can do all the loads. This would be easy, esp. if you just have a script that will build and run the native "mysql" command line for each server in the list.
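    That last suggestion could be sketched like this. It is only a sketch: the servers.txt format (one server per line: host, user, password, db name), the build_mysql_cmd/load_all helper names, and the assumption that none of the values contain shell metacharacters are all mine, not part of the original script.

    ```perl
    use strict;
    use warnings;

    # Build the native mysql command line for one target server.
    # Assumes the values contain no shell metacharacters; quote them if they might.
    sub build_mysql_cmd {
        my ($host, $user, $password, $db, $dumpfile) = @_;
        return "mysql --host=$host --user=$user --password=$password $db < $dumpfile";
    }

    # One run loads the same dump into every server in the list.
    # Each element of @servers is [ $host, $user, $password, $db ].
    sub load_all {
        my ($dumpfile, @servers) = @_;
        for my $server (@servers) {
            my $cmd = build_mysql_cmd(@{$server}, $dumpfile);
            system($cmd) == 0 or warn "Load failed on $server->[0]: $?\n";
        }
    }
    ```

    This way the native client handles comments, delimiters and large dumps, and one command-line run covers all the target servers.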
Re: Batch database creation
by alexm (Chaplain) on Aug 10, 2008 at 20:30 UTC
    local $/; $sql_dump = <$source>;

    SQL dumps are usually very large, so slurping the whole dump in memory doesn't seem the right thing to do.
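    One way to avoid the slurp is to set $/ to the statement terminator and read one statement at a time. A minimal sketch, assuming statements end in ";\n" as the original split does (the for_each_statement helper name is mine):

    ```perl
    use strict;
    use warnings;

    # Stream the dump one statement at a time instead of slurping it.
    sub for_each_statement {
        my ($filename, $callback) = @_;

        open my $source, '<', $filename or die "Can't open $filename: $!";
        local $/ = ";\n";                    # read in statement-sized chunks

        while (my $statement = <$source>) {
            chomp $statement;                # strip the ";\n" terminator
            next unless $statement =~ /\S/;  # skip empty chunks
            $callback->($statement);
        }
        close $source;
    }
    ```

    The main loop then becomes something like for_each_statement($batch_file, sub { $dbh->do($_[0]) or die $dbh->errstr }), and memory use stays bounded by the largest single statement rather than the whole dump.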