in reply to Re: fast simple DB (sqlite?) skeleton?
in thread fast simple DB (sqlite?) skeleton?

Your SQLite benchmark is seriously flawed. You really, really should prepare() your INSERT statement and wrap everything into a single transaction. And please, don't use "synchronous=OFF": it sacrifices crash safety for raw speed, which makes the comparison unrealistic.

#!/usr/bin/perl

use strict;
use warnings;

use Benchmark ':hireswallclock';
use DB_File ();
use DBI ();

my $DB_FILE;
my $DBH_SQLITE;

my @CHARS = ( 'A' .. 'Z', 'a' .. 'z', 0 .. 9, qw( ! @ $ % ^ & * + ) );
my $NUM_RECORDS = 50_000;

setup_dbfile();
setup_sqlite();

Benchmark::cmpthese(
    10,
    {
        'berkeley' => \&benchmark_dbfile,
        'sqlite'   => \&benchmark_sqlite,
    }
);

untie %$DB_FILE;

sub benchmark_dbfile {
    my $value = 0;
    while ( $value < $NUM_RECORDS ) {
        my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
        # look up the key first, then append or insert
        if ( exists $DB_FILE->{$key} ) {
            $DB_FILE->{$key} .= ",$value";
        }
        else {
            $DB_FILE->{$key} = $value;
        }
        $value++;
    }
    return;
}

sub benchmark_sqlite {
    # prepare the INSERT once and reuse it for every row
    my $sth = $DBH_SQLITE->prepare(
        'INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )'
    );
    # one big transaction instead of 50,000 autocommits
    $DBH_SQLITE->begin_work;
    my $value = 0;
    while ( $value < $NUM_RECORDS ) {
        my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
        $sth->execute( $key, $value );
        $value++;
    }
    $DBH_SQLITE->commit;
    return;
}

sub setup_sqlite {
    $DBH_SQLITE = DBI->connect(
        'dbi:SQLite:dbname=benchmark.sqlite',
        '', '',
        { 'RaiseError' => 1 }
    );
    if ( ! $DBH_SQLITE->tables( undef, undef, 'benchmark', 'TABLE' ) ) {
        $DBH_SQLITE->do(
            'CREATE TABLE benchmark ( key VARCHAR, value INTEGER )'
        );
    }
    return;
}

sub setup_dbfile {
    my %data;
    tie %data, 'DB_File', 'berkeley.db' or die "$!";
    $DB_FILE = \%data;
    return;
}

Results:

         s/iter berkeley sqlite
berkeley   4.59       --   -81%
sqlite    0.877     423%     --

This is not surprising: the DB_File version has to perform an extra lookup (the "exists" check) for every insert, while the SQLite version only ever inserts.

Re^3: fast simple DB (sqlite?) skeleton?
by iaw4 (Monk) on Jan 27, 2010 at 13:54 UTC
    Thanks a lot, WizardOfUz.

    Let's presume that I wanted to mimic the code of my Berkeley database, though, in which I first look up whether the key already exists, and combine values under the same key if it is already there. In that case, was my original assessment of SQLite's speed correct? That is, I first need to insert every record so that I can look it up when processing the next one.

    If this is true, then it seems to me that the Berkeley database would be a lot faster in situations where "bulk inserts" are not possible, while SQLite is much faster in situations where they are.
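
    Here is roughly what I have in mind -- an untested sketch, reusing the table, the $DBH_SQLITE handle and the helper variables from your benchmark above. (If I understand correctly, the per-record lookup can still live inside one big transaction, since a connection sees its own uncommitted writes.)

    # look-up-then-append in SQLite, mimicking the DB_File logic
    my $select = $DBH_SQLITE->prepare(
        'SELECT value FROM benchmark WHERE key = ?'
    );
    my $update = $DBH_SQLITE->prepare(
        'UPDATE benchmark SET value = ? WHERE key = ?'
    );
    my $insert = $DBH_SQLITE->prepare(
        'INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )'
    );

    $DBH_SQLITE->begin_work;
    foreach my $value ( 1 .. $NUM_RECORDS ) {
        my $key = join( '', @CHARS[ map { rand @CHARS } ( 1 .. 5 ) ] );
        # look up the key first, as in the DB_File version
        my ($old) = $DBH_SQLITE->selectrow_array( $select, undef, $key );
        if ( defined $old ) {
            $update->execute( "$old,$value", $key );
        }
        else {
            $insert->execute( $key, $value );
        }
    }
    $DBH_SQLITE->commit;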

    I have not installed MySQL, but I wonder if someone could tell me how fast/slow the same code would be in MySQL.
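
    Presumably only the connection and the table setup would need to change, something like the following untested sketch -- host, database name and credentials are just placeholders:

    use DBI ();

    # placeholder credentials; note that "key" is a reserved word
    # in MySQL, hence the backticks
    my $dbh = DBI->connect(
        'dbi:mysql:database=test;host=localhost',
        'user', 'password',
        { 'RaiseError' => 1 }
    );
    $dbh->do(
        'CREATE TABLE IF NOT EXISTS benchmark'
        . ' ( `key` VARCHAR(16), `value` INTEGER )'
    );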

      Here is an easy way to make the bulk update for DB_File work a lot faster -- preprocess the input data before going near the database. What you can do depends very much on the nature of your data, but your initial example implies you will always append to existing entries in the DB. So here is an example that shows the gains that can be made by preprocessing -- I've assumed there are 1000 unique keys in the database, and I'm adding 50k records.

      First, note the performance gain from preprocessing.

                  s/iter original preprocess
      original      3.26       --       -93%
      preprocess   0.221    1375%         --
      And here is the code:
      #!/usr/bin/perl

      use strict;
      use warnings;

      use Benchmark ':hireswallclock';
      use DB_File ();

      my $DB_FILE1;
      my $DB_FILE2;

      my $NUM_RECORDS = 50_000;
      my $NUM_KEYS    = 1000;

      setup_dbfile();

      Benchmark::cmpthese(
          10,
          {
              'original'   => \&benchmark_dbfile1,
              'preprocess' => \&benchmark_dbfile2,
          }
      );

      # original approach: one tied-hash lookup and store per record
      sub benchmark_dbfile1 {
          foreach my $value ( 1 .. $NUM_RECORDS ) {
              my $key = int( rand($NUM_KEYS) );
              if ( exists $DB_FILE1->{$key} ) {
                  $DB_FILE1->{$key} .= ",$value";
              }
              else {
                  $DB_FILE1->{$key} = $value;
              }
          }
      }

      # preprocessed approach: collect all values per key in memory,
      # then touch the database only once per unique key
      sub benchmark_dbfile2 {
          my %preprocess = ();
          foreach my $value ( 1 .. $NUM_RECORDS ) {
              my $key = int( rand($NUM_KEYS) );
              push @{ $preprocess{$key} }, $value;
          }
          while ( my ( $key, $val_list ) = each %preprocess ) {
              my $value = join ',', @$val_list;
              if ( exists $DB_FILE2->{$key} ) {
                  $DB_FILE2->{$key} .= ",$value";
              }
              else {
                  $DB_FILE2->{$key} = $value;
              }
          }
      }

      sub setup_dbfile {
          {
              unlink 'berkeley.db1';
              my %data;
              tie %data, 'DB_File', 'berkeley.db1' or die "$!";
              $DB_FILE1 = \%data;
          }
          {
              unlink 'berkeley.db2';
              my %data;
              tie %data, 'DB_File', 'berkeley.db2' or die "$!";
              $DB_FILE2 = \%data;
          }
      }

        Using the original key generation logic, which produces hardly any key collisions, I get the following results:

                    s/iter preprocess original
        preprocess    4.30         --      -4%
        original      4.11         5%       --
      Another point to note about the SQLite example code is that it is doing a straight insert, not an update. Your original code, at the start of this thread, implies you want updates. So is a bulk insert a valid use-case for your application?
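
      If updates are what you need, the preprocessing idea should carry over to SQLite as well. Here is an untested sketch, assuming the benchmark table, the $DBH_SQLITE handle and the $NUM_RECORDS/$NUM_KEYS constants from earlier in the thread: group the values per key in memory first, then do the SELECT/UPDATE/INSERT round trip only once per unique key, inside a single transaction.

      # collect all values per key in memory first
      my %preprocess;
      foreach my $value ( 1 .. $NUM_RECORDS ) {
          my $key = int( rand($NUM_KEYS) );
          push @{ $preprocess{$key} }, $value;
      }

      my $select = $DBH_SQLITE->prepare('SELECT value FROM benchmark WHERE key = ?');
      my $update = $DBH_SQLITE->prepare('UPDATE benchmark SET value = ? WHERE key = ?');
      my $insert = $DBH_SQLITE->prepare('INSERT INTO benchmark ( key, value ) VALUES ( ?, ? )');

      $DBH_SQLITE->begin_work;
      while ( my ( $key, $val_list ) = each %preprocess ) {
          my $joined = join ',', @$val_list;
          # one lookup per unique key instead of one per record
          my ($old) = $DBH_SQLITE->selectrow_array( $select, undef, $key );
          if ( defined $old ) {
              $update->execute( "$old,$joined", $key );
          }
          else {
              $insert->execute( $key, $joined );
          }
      }
      $DBH_SQLITE->commit;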