Re: Fastest way to lookup a point in a set

by talexb (Chancellor)
on Aug 08, 2017 at 16:05 UTC ( [id://1197008] )


in reply to Fastest way to lookup a point in a set

I'm late to this party, but did you try a database solution? I would give SQLite a shot.
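
For what it's worth, here's a minimal sketch of the kind of thing I had in mind, using DBI with DBD::SQLite. The database file, table and column names are only placeholders, and the sample points are made up:

use strict;
use warnings;
use DBI;

# Open (or create) an SQLite database file.
my $dbh = DBI->connect( "dbi:SQLite:dbname=points.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# A composite primary key on (x, y) gives an indexed membership test.
$dbh->do("CREATE TABLE IF NOT EXISTS points (x INTEGER, y INTEGER, PRIMARY KEY (x, y))");

# Load the set of points once.
my $ins = $dbh->prepare("INSERT OR IGNORE INTO points (x, y) VALUES (?, ?)");
$ins->execute( @$_ ) for [ 0, 0 ], [ -1, -2 ], [ 1, 2 ];

# Membership test: is (x, y) in the set?
my $sth = $dbh->prepare("SELECT 1 FROM points WHERE x = ? AND y = ?");

sub in_set {
    my ( $x, $y ) = @_;
    $sth->execute( $x, $y );
    my $row = $sth->fetchrow_arrayref;
    $sth->finish;
    return defined $row;
}

print in_set( 1, 2 ) ? "hit\n" : "miss\n";
print in_set( 5, 5 ) ? "hit\n" : "miss\n";

Whether that can keep up with an in-memory hash is exactly the question, of course.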

Alex / talexb / Toronto

Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.


Re^2: Fastest way to lookup a point in a set
by marioroy (Prior) on Aug 08, 2017 at 20:11 UTC

    Update: Added SQLite_File for comparison.

    Hello talexb,

    The following is a demonstration comparing a plain hash against CDB_File, DB_File, and SQLite_File.

    use strict;
    use warnings;
    use feature 'state';

    use CDB_File;
    use DB_File;
    use SQLite_File;
    use Time::HiRes 'time';

    my @points = (
        [ 0, 0 ], [ -1, -2 ], [ 1, 2 ], [ -1, 2 ],
        ...
    );

    sub plain_hash {
        my %hash;
        $hash{ join(':', @{$_}) } = 1 for @points;
        \%hash;
    }

    sub cdb_hash {
        my %hash;

        # create CDB file
        my $cdb = CDB_File->new("t.cdb", "t.cdb.$$") or die "$!\n";
        $cdb->insert( join(':', @{$_}), 1 ) for @points;
        $cdb->finish;

        # use CDB file
        tie %hash, 'CDB_File', "t.cdb" or die "$!\n";
        \%hash;
    }

    sub db_hash {
        my %hash;

        # create DB file
        tie %hash, 'DB_File', "t.db", O_CREAT|O_RDWR, 0640, $DB_BTREE;
        $hash{ join(':', @{$_}) } = 1 for @points;
        untie %hash;

        # use DB file
        tie %hash, 'DB_File', "t.db", O_RDWR, 0640, $DB_BTREE;
        \%hash;
    }

    sub sql_hash {
        tie my %hash, 'SQLite_File', "sql.db";
        $hash{ join(':', @{$_}) } = 1 for @points;
        \%hash;
    }

    sub look {
        my $cells = shift;
        state $points_str = [ map { join(':', @{$_}) } @points ];
        for my $p (@{ $points_str }) {
            exists $cells->{$p} or die;
        }
    }

    sub bench {
        my ( $desc, $func ) = @_;
        my ( $cells, $iters ) = ( $func->(), 50000 );

        my $start = time;
        look($cells) for 1 .. $iters;
        my $elapse = time - $start;

        printf "%s duration : %0.03f secs\n", $desc, $elapse;
        printf "%s lookups : %d / sec\n", $desc, @points * $iters / $elapse;
    }

    bench( "plain", \&plain_hash );
    bench( "  cdb", \&cdb_hash );
    bench( "   db", \&db_hash );
    bench( "  sql", \&sql_hash );

    Results: The native Perl on Mac OS X 10.11.6 is v5.18.2. The hardware is a Haswell i7 chip at 2.6 GHz.

    Regarding CDB_File and DB_File performance: both run better with Perl 5.20 and later releases.

    $ perl test.pl

    plain duration : 0.639 secs
    plain lookups : 10243321 / sec
      cdb duration : 5.692 secs
      cdb lookups : 1150711 / sec
       db duration : 9.746 secs
       db lookups : 672046 / sec

    $ /opt/perl-5.26.0/bin/perl test.pl

    plain duration : 0.517 secs
    plain lookups : 12677776 / sec
      cdb duration : 3.881 secs
      cdb lookups : 1687545 / sec
       db duration : 6.134 secs
       db lookups : 1067780 / sec
      sql duration : 184.338 secs
      sql lookups : 35532 / sec

    Regards, Mario

      Thanks for that .. I wonder what happens when the number of data points increases by a few orders of magnitude? That's where I would expect a database solution to start to overtake the hash-based solution.
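
      A rough way to probe that (just a sketch -- the point count, coordinate range and key format are arbitrary, and it only exercises the hash side) would be to generate a much larger synthetic set and time the same kind of lookup loop; a fair comparison would then load the same keys into the database:

      use strict;
      use warnings;
      use Time::HiRes 'time';

      # Build a synthetic set of N random integer points
      # (10M short keys will need on the order of a GB of RAM).
      my $n = 10_000_000;
      my %cells;
      while ( keys %cells < $n ) {
          my ( $x, $y ) = ( int rand 100_000, int rand 100_000 );
          $cells{"$x:$y"} = 1;
      }

      # Time random lookups (mostly misses) against the plain hash.
      my $iters = 5_000_000;
      my $start = time;
      my $hits  = 0;
      for ( 1 .. $iters ) {
          my ( $x, $y ) = ( int rand 100_000, int rand 100_000 );
          $hits++ if exists $cells{"$x:$y"};
      }
      my $elapse = time - $start;
      printf "%d lookups in %0.3f secs (%d / sec), %d hits\n",
          $iters, $elapse, $iters / $elapse, $hits;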

      Alex / talexb / Toronto

      Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.

        I've run Perl's hashes up to 30 billion keys / 2 terabytes (RAM), and they are 1 to 2 orders of magnitude faster, and roughly 1/3rd the size, compared with storing the same data (64-bit integers) in an SQLite memory-based DB. And the performance difference increases as the size grows.

        Part of the difference is that, however fast the C/C++ DB code is, calling into it from Perl adds a layer of unavoidable overhead that Perl's built-in hashes do not have.
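
        A crude way to see just that call overhead, independent of any actual DB work, is to benchmark a plain hash against a tied one; Tie::StdHash below is only a stand-in for the extra layer of method dispatch, not a database:

        use strict;
        use warnings;
        use Benchmark 'cmpthese';
        use Tie::Hash;                 # core module; provides Tie::StdHash

        my %plain = ( '123:456' => 1 );

        # The same data behind a tie: every access becomes a method call.
        tie my %tied, 'Tie::StdHash';
        %tied = %plain;

        cmpthese( -2, {
            plain => sub { my $x = exists $plain{'123:456'} },
            tied  => sub { my $x = exists $tied{'123:456'} },
        } );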

        The second part is that indexing very large, sparse ranges of data can be done in one of two ways:

        1. Hashing.

          Perl's hashing and storage algorithms trade space for speed and are effectively optimal for speed: O(1) + amortised growth costs.

          The latter adds a small constant, and can be completely eliminated by pre-sizing (a sketch follows after this list).

        2. Pre-sort or binary tree -- which are effectively equivalent.

          Either mechanism is, at minimum, O(log2 N) + DB transactions, journaling and other DB maintenance costs.

          Some, but not all, of the extra costs can be disabled.

        In the end, O(1) trumps O(log N) and native trumps calling out to C.
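
        For reference, a small sketch of that pre-sizing (the key count and key format here are arbitrary). Assigning to keys(%hash) tells Perl how many buckets to allocate up front, so loading never triggers an incremental grow-and-rehash:

        use strict;
        use warnings;

        my $expected = 1_000_000;   # arbitrary: how many keys we intend to store

        my %cells;
        keys(%cells) = $expected;   # pre-size: buckets allocated up front
                                    # (Perl rounds up to the next power of two)

        # Loading now proceeds without incremental growth/rehashing.
        for my $i ( 1 .. $expected ) {
            my ( $x, $y ) = ( $i % 1000, int( $i / 1000 ) );
            $cells{"$x:$y"} = 1;
        }

        printf "stored %d keys\n", scalar keys %cells;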

        From compiled-to-native code -- whatever the language -- it is possible to tailor solutions to the lookup domain that will out-perform Perl's native hashes: Judy arrays, radix trees, compressed bitmaps, simplified and tailored hash-arrays. But any of those mechanisms falls short once you add the overhead of calling out from Perl.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: Fastest way to lookup a point in a set
by erix (Prior) on Aug 08, 2017 at 16:34 UTC

    did you try a database solution?

    I did. It was so spectacularly much slower that I didn't bother posting it (and I used Postgres).
