in reply to fast simple DB (sqlite?) skeleton?

Thank you, everyone. I tried to put together a couple of examples of how to do this simply. (A bulk insert may or may not be appropriate, because if a record already exists, I want to append to it; this means I need to look it up first and then decide.)

The results are actually quite interesting:

Method                         File Size     Speed
DBM::Deep                     20,191,941     83 seconds
Berkeley                       1,323,008      1 second
DBI SQLite (synchronous=OFF)     950,272     21 seconds
DBI SQLite                       950,272    118 seconds

Of course, there may be better ways to tune these routines; I am an amateur at this. If I have made mistakes, or if I should try something simpler, please correct me. For now, these are nice examples that should work as simple skeletons and guides for others, showing how to read from and write to embedded databases under Perl. (The timings in the table of course do not reflect changing existing records; this is rare. I also have no idea how well or poorly these databases scale.)

#!/usr/bin/perl -w
use strict;
use warnings FATAL => qw{ uninitialized };

################################################################
my $numofwords = 50000;
my @randomword;

################################################################
## not really a database
################################################################
sub memhash {
    my %words;
    for my $i (0 .. $numofwords - 1) {
        my $rin = $randomword[$i];
        if (defined $words{$rin}) { $words{$rin} .= ",$i"; }
        else                      { $words{$rin}  = $i; }
    }
}

################################################################
## the perl module
################################################################
sub dbmdeep {
    use DBM::Deep;
    my $db = DBM::Deep->new("dbm.db");
    for my $i (0 .. $numofwords - 1) {
        my $rin = $randomword[$i];
        if (defined $db->{$rin}) { $db->{$rin} .= ",$i"; }
        else                     { $db->{$rin}  = $i; }
    }
}

################################################################
## the standard berkeley database
################################################################
sub berkeley {
    use DB_File;
    my $filename = "berkeley.db";
    my %words;
    tie %words, "DB_File", $filename
        or die "cannot open $filename: $!\n";
    for my $i (0 .. $numofwords - 1) {
        my $rin = $randomword[$i];
        if (defined $words{$rin}) { $words{$rin} .= ",$i"; }
        else                      { $words{$rin}  = $i; }
    }
    untie %words;
}

################################################################
## sql is treated differently: it allows multiple rows per key,
## so we won't combine just yet, to give it an advantage.
################################################################
sub dbisqlite_nopragma {
    use DBI;
    my $dbfile = 'sqlite_nopragma.db';   # your database file
    unlink $dbfile;
    my $dbh = DBI->connect(              # connect to your database, create if needed
        "dbi:SQLite:dbname=$dbfile",     # DSN: dbi, driver, database file
        "",                              # no user
        "",                              # no password
        { RaiseError => 1 },             # complain if something goes wrong
    ) or die $DBI::errstr;
    $dbh->do("CREATE TABLE allwords ( word, record )");
    # $dbh->do("PRAGMA synchronous=OFF");
    for my $i (0 .. $numofwords - 1) {
        $dbh->do("INSERT INTO allwords VALUES ( '$randomword[$i]', '$i' )");
    }
    $dbh->disconnect();
}

################################################################
sub dbisqlite_pragma {
    use DBI;
    my $dbfile = 'sqlite_pragma.db';     # your database file
    unlink $dbfile;
    my $dbh = DBI->connect(
        "dbi:SQLite:dbname=$dbfile",
        "",
        "",
        { RaiseError => 1 },
    ) or die $DBI::errstr;
    $dbh->do("CREATE TABLE allwords ( word, record )");
    $dbh->do("PRAGMA synchronous=OFF");
    for my $i (0 .. $numofwords - 1) {
        $dbh->do("INSERT INTO allwords VALUES ( '$randomword[$i]', '$i' )");
    }
    $dbh->disconnect();
}

################################################################
my @chars = ( "A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & *) );
for my $i (0 .. $numofwords - 1) {
    push @randomword, join("", @chars[ map { rand @chars } ( 1 .. 5 ) ]);
}
print STDERR "[$0: created $numofwords words for testing]\n";

unlink glob("*.db");   # remove all prior test files

use Benchmark ':hireswallclock';   ## unfortunately, this fails under OSX (Mac Pro): always gives 0 secs

print time()."\t".localtime()."\n";
print "memhash\n";
Benchmark::timethis( 1, \&memhash );
print time()."\t".localtime()."\n";
print "dbmdeep\n";
Benchmark::timethis( 1, \&dbmdeep );
print time()."\t".localtime()."\n";
print "berkeley\n";
Benchmark::timethis( 1, \&berkeley );
print time()."\t".localtime()."\n";
print "sqlite nopragma\n";
Benchmark::timethis( 1, \&dbisqlite_nopragma );
print time()."\t".localtime()."\n";
print "sqlite pragma\n";
Benchmark::timethis( 1, \&dbisqlite_pragma );
print time()."\t".localtime()."\n";

PS: Under OSX, the timer seems broken.

I thus tried to install DBM::Deep under Ubuntu Linux, but there I got:

cpan[1]> install DBM::Deep
CPAN: Storable loaded ok (v2.21)
Going to read '/home/ivo/.cpan/Metadata'
  Database was generated on Tue, 26 Jan 2010 12:38:07 GMT
CPAN: YAML loaded ok (v0.71)
Going to read 122 yaml files from /home/ivo/.cpan/build/
CPAN: Time::HiRes loaded ok (v1.9719)
DONE
Restored the state of none (in 1.2153 secs)
Running install for module 'DBM::Deep'
Running Build for R/RK/RKINYON/DBM-Deep-1.0014.tar.gz
  Has already been unwrapped into directory /home/ivo/.cpan/build/DBM-Deep-1.0014-7f188H
  Has already been made
Running Build test
* WARNING: Configuration was initially created with Module::Build
  version '0.280801' but we are now using version '0.3603'. If errors
  occur, you must re-run the Build.PL or Makefile.PL script.
default not defined in test_types! at ./Build line 61
  RKINYON/DBM-Deep-1.0014.tar.gz
  ./Build test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports RKINYON/DBM-Deep-1.0014.tar.gz
Running Build install
  make test had returned bad status, won't install without force
Failed during this command:
 RKINYON/DBM-Deep-1.0014.tar.gz               : make_test NO
So I ended up resorting to timing it with printed seconds under OSX. Not pretty.

Replies are listed 'Best First'.
Re^2: fast simple DB (sqlite?) skeleton?
by WizardOfUz (Friar) on Jan 27, 2010 at 09:59 UTC

    Your SQLite benchmark is seriously flawed. You really, really should prepare() your INSERT statement and wrap everything into a single transaction. And please, don't use "synchronous=OFF".

    #!/usr/bin/perl
    use strict;
    use warnings;

    use Benchmark ':hireswallclock';
    use DB_File ();
    use DBI ();

    my $DB_FILE;
    my $DBH_SQLITE;
    my @CHARS = ( 'A' .. 'Z', 'a' .. 'z', 0 .. 9, qw( ! @ $ % ^ & * ) );
    my $NUM_RECORDS = 50_000;

    setup_dbfile();
    setup_sqlite();

    Benchmark::cmpthese( 10, {
        'berkeley' => \&benchmark_dbfile,
        'sqlite'   => \&benchmark_sqlite,
    } );

    untie %$DB_FILE;

    sub benchmark_dbfile {
        my $value = 0;
        while ( $value < $NUM_RECORDS ) {
            my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
            if ( exists $DB_FILE->{$key} ) {
                $DB_FILE->{$key} .= ",$value";
            }
            else {
                $DB_FILE->{$key} = $value;
            }
            $value++;
        }
        return;
    }

    sub benchmark_sqlite {
        my $sth = $DBH_SQLITE->prepare( 'INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )' );
        $DBH_SQLITE->begin_work;
        my $value = 0;
        while ( $value < $NUM_RECORDS ) {
            my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
            $sth->execute( $key, $value );
            $value++;
        }
        $DBH_SQLITE->commit;
        return;
    }

    sub setup_sqlite {
        $DBH_SQLITE = DBI->connect(
            'dbi:SQLite:dbname=benchmark.sqlite', '', '', { 'RaiseError' => 1 }
        );
        if ( ! $DBH_SQLITE->tables( undef, undef, 'benchmark', 'TABLE' ) ) {
            $DBH_SQLITE->do( 'CREATE TABLE benchmark ( key VARCHAR, value INTEGER )' );
        }
        return;
    }

    sub setup_dbfile {
        my %data;
        tie %data, 'DB_File', 'berkeley.db' or die "$!";
        $DB_FILE = \%data;
        return;
    }

    Results:

             s/iter berkeley sqlite
    berkeley   4.59       --   -81%
    sqlite    0.877     423%     --

    This is not surprising, because DB_File has to perform an extra "SELECT" for every "INSERT".
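    The prepare-once, single-transaction pattern described above can also be sketched with Python's built-in sqlite3 module, for readers without DBI at hand (table name and row counts here are illustrative, not from the thread):

```python
import sqlite3

# In-memory database for demonstration; the pattern is identical for a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE benchmark ( key VARCHAR, value INTEGER )")

rows = [("k%d" % (i % 100), i) for i in range(1000)]

# One parameterized statement inside one transaction: the INSERT is
# compiled once, and the database syncs once at commit time instead of
# once per row as in per-statement autocommit.
with conn:  # opens a transaction, commits on success
    conn.executemany("INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )", rows)

count = conn.execute("SELECT COUNT(*) FROM benchmark").fetchone()[0]
print(count)  # → 1000
```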

      thanks a lot, WizardOfUz.

      Let's presume that I wanted to mimic the code of my Berkeley database, though, in which I first look up whether the key already exists, and then combine values under the same key if it is already there. In this case, was my original assessment of SQLite's speed correct? That is, I first need to insert every record so that I can look up the next record.

      If this is true, then it seems to me that the Berkeley database would be a lot faster in situations where "bulk inserts" are not possible, but SQLite is much faster in situations where they are.

      I have not installed MySQL, but I wonder if someone could tell me how fast or slow the same code would be under MySQL.
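      One way to get the lookup-and-append behaviour without a separate SELECT per row is SQLite's upsert syntax (INSERT ... ON CONFLICT ... DO UPDATE), though note it requires SQLite 3.24 or later, so it postdates this thread. A minimal sketch with Python's built-in sqlite3 module and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allwords ( word TEXT PRIMARY KEY, record TEXT )")

words = ["abc", "xyz", "abc"]  # "abc" occurs twice

with conn:
    for i, w in enumerate(words):
        # One statement both inserts a new key and appends to an
        # existing one, so no separate lookup round-trip is needed.
        conn.execute(
            "INSERT INTO allwords ( word, record ) VALUES ( ?, ? ) "
            "ON CONFLICT(word) DO UPDATE "
            "SET record = record || ',' || excluded.record",
            (w, str(i)),
        )

combined = conn.execute(
    "SELECT record FROM allwords WHERE word = 'abc'"
).fetchone()[0]
print(combined)  # → 0,2
```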

        Here is an easy way to make the bulk update for DB_File work a lot faster: preprocess the input data before going near the database. What you can do depends very much on the nature of your data, but your initial example implies you will always append to existing entries in the DB. So here is an example that shows the gains to be made by preprocessing; I've assumed there are 1000 unique keys in the database, and I'm adding 50k records.

        First, note the performance gain from preprocessing.

                   s/iter original preprocess
        original     3.26       --       -93%
        preprocess  0.221    1375%         --
        And here is the code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        use Benchmark ':hireswallclock';
        use DB_File ();

        my $DB_FILE1;
        my $DB_FILE2;
        my $NUM_RECORDS = 50_000;
        my $NUM_KEYS    = 1000;

        setup_dbfile();

        Benchmark::cmpthese( 10, {
            'original'   => \&benchmark_dbfile1,
            'preprocess' => \&benchmark_dbfile2,
        } );

        sub benchmark_dbfile1 {
            foreach my $value ( 1 .. $NUM_RECORDS ) {
                my $key = int( rand($NUM_KEYS) );
                if ( exists $DB_FILE1->{$key} ) {
                    $DB_FILE1->{$key} .= ",$value";
                }
                else {
                    $DB_FILE1->{$key} = $value;
                }
            }
        }

        sub benchmark_dbfile2 {
            # collect all values per key in memory first ...
            my %preprocess = ();
            foreach my $value ( 1 .. $NUM_RECORDS ) {
                my $key = int( rand($NUM_KEYS) );
                push @{ $preprocess{$key} }, $value;
            }
            # ... then touch the database only once per key
            while ( my ( $key, $val_list ) = each %preprocess ) {
                my $value = join ",", @$val_list;
                if ( exists $DB_FILE2->{$key} ) {
                    $DB_FILE2->{$key} .= ",$value";
                }
                else {
                    $DB_FILE2->{$key} = $value;
                }
            }
        }

        sub setup_dbfile {
            {
                unlink 'berkeley.db1';
                my %data;
                tie %data, 'DB_File', 'berkeley.db1' or die "$!";
                $DB_FILE1 = \%data;
            }
            {
                unlink 'berkeley.db2';
                my %data;
                tie %data, 'DB_File', 'berkeley.db2' or die "$!";
                $DB_FILE2 = \%data;
            }
        }
        Another point to note about the SQLite example code is that it does a straight insert, not an update. Your original code, at the start of this thread, implies you want updates. So is a bulk insert a valid use case for your application?
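        On that last point: since SQL happily stores several rows per key, one option is to keep the bulk insert and defer the combining to read time with an aggregate. A sketch of the idea with Python's built-in sqlite3 module and made-up data (note that group_concat does not guarantee the order of the concatenated elements):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allwords ( word TEXT, record INTEGER )")

words = ["abc", "xyz", "abc"]  # duplicate key "abc"

# Bulk insert the duplicates as-is -- no lookup, no update.
with conn:
    conn.executemany(
        "INSERT INTO allwords ( word, record ) VALUES ( ?, ? )",
        [(w, i) for i, w in enumerate(words)],
    )

# Combine the values per key only when reading.
records = conn.execute(
    "SELECT group_concat(record, ',') FROM allwords WHERE word = 'abc'"
).fetchone()[0]
print(records)  # e.g. "0,2" (concatenation order is unspecified)
```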