
Here is an easy way to make the bulk update for DB_File run a lot faster -- preprocess the input data before going near the database. What you can do depends heavily on the nature of your data, but your initial example implies you will always append to existing entries in the DB. So here is an example that shows the gains to be made by preprocessing - I've assumed there are 1000 unique keys in the database, and I'm adding 50k records.

First, note the performance gain from preprocessing.

             s/iter   original  preprocess
original       3.26         --        -93%
preprocess    0.221      1375%          --
And here is the code:

#!/usr/bin/perl

use strict;
use warnings;

use Benchmark ':hireswallclock';
use DB_File ();

my $DB_FILE1;
my $DB_FILE2;

my $NUM_RECORDS = 50_000;
my $NUM_KEYS    = 1000;

setup_dbfile();

Benchmark::cmpthese( 10, {
    'original'   => \&benchmark_dbfile1,
    'preprocess' => \&benchmark_dbfile2,
} );

# Original approach: one read-and-append against the tied DB hash
# for every single record.
sub benchmark_dbfile1 {
    foreach my $value ( 1 .. $NUM_RECORDS ) {
        my $key = int( rand($NUM_KEYS) );
        if ( exists $DB_FILE1->{$key} ) {
            $DB_FILE1->{$key} .= ",$value";
        }
        else {
            $DB_FILE1->{$key} = $value;
        }
    }
}

# Preprocessed approach: collect all values per key in a plain
# in-memory hash first, then touch the database only once per
# unique key.
sub benchmark_dbfile2 {
    my %preprocess = ();
    foreach my $value ( 1 .. $NUM_RECORDS ) {
        my $key = int( rand($NUM_KEYS) );
        push @{ $preprocess{$key} }, $value;
    }
    while ( my ( $key, $val_list ) = each %preprocess ) {
        my $value = join ",", @$val_list;
        if ( exists $DB_FILE2->{$key} ) {
            $DB_FILE2->{$key} .= ",$value";
        }
        else {
            $DB_FILE2->{$key} = $value;
        }
    }
}

sub setup_dbfile {
    {
        unlink 'berkeley.db1';
        my %data;
        tie %data, 'DB_File', 'berkeley.db1' or die "$!";
        $DB_FILE1 = \%data;
    }
    {
        unlink 'berkeley.db2';
        my %data;
        tie %data, 'DB_File', 'berkeley.db2' or die "$!";
        $DB_FILE2 = \%data;
    }
}

Re^5: fast simple DB (sqlite?) skeleton?
by WizardOfUz (Friar) on Jan 28, 2010 at 13:11 UTC

    Using the original key generation logic, which produces hardly any key collisions, I get the following results:

                 s/iter  preprocess  original
    preprocess     4.30          --       -4%
    original       4.11          5%        --
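    For contrast, here is a minimal sketch of a key generator whose key space is much larger than the number of records (an assumption about what the original logic looked like; the actual code is not shown in this thread). It illustrates why preprocessing has almost nothing to merge in that scenario:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical wide-key-space generator: 50_000 draws from 2**31
    # possible keys collide almost never (birthday-problem estimate:
    # roughly 50_000**2 / (2 * 2**31), well under one on average).
    my %seen;
    my $collisions = 0;
    for ( 1 .. 50_000 ) {
        my $key = int( rand( 2**31 ) );
        $collisions++ if $seen{$key}++;
    }
    print "collisions: $collisions\n";    # typically 0 or 1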
      That is hardly surprising: with hardly any key collisions, the preprocessing pass coalesces almost nothing, so the test harness carries out just as many db writes and pays the extra cost of preprocessing for no benefit.

      We would need to know more about the input data (like the % of key collisions) to take this any further.
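      One quick way to take it further would be to scan the input and measure the collision rate before choosing a strategy. A minimal sketch, assuming tab-separated "key<TAB>value" records on STDIN (the real input format isn't shown in this thread, so adjust the split accordingly):

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Count how many records share an already-seen key; a high
      # percentage means preprocessing will coalesce many db writes.
      my ( %count, $records );
      while ( my $line = <STDIN> ) {
          chomp $line;
          my ($key) = split /\t/, $line;    # assumed record layout
          $count{$key}++;
          $records++;
      }
      my $unique = keys %count;
      printf "%d records, %d unique keys, %.1f%% key collisions\n",
          $records, $unique,
          $records ? 100 * ( $records - $unique ) / $records : 0;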