in reply to Re^6: Do you really want to use an array there?
in thread Do you really want to use an array there?

Please look above in this thread at what I wrote...

The only reason I want to use compression in my index is performance. That was my thinking until now, but it seems I wasn't right :( since with the vec function I can decode 3,000,000 doc ids in 2 seconds and 10 million in 6 seconds!

Let's forget the Elias code and benchmark only vec, unpack 'V*', and pack 'w*'... and finally find the winner...

Replies are listed 'Best First'.
Re^8: Do you really want to use an array there?
by BrowserUk (Patriarch) on Apr 14, 2008 at 18:25 UTC

    Ok. Here you go:

    #! perl -slw
    use strict;
    use Benchmark qw[ cmpthese ];

    our $packedV = pack 'V*', 1 .. 1e6;
    our $packedW = pack 'w*', 1 .. 1e6;
    our $packedVec = '';
    vec( $packedVec, $_, 32 ) = $_ for 0 .. 1e6 - 1;

    cmpthese -5, {
        unpackV => q[ my @nums = unpack 'V*', $packedV; ],
        unpackW => q[ my @nums = unpack 'w*', $packedW; ],
        unVec   => q[
            my @nums;
            push @nums, vec( $packedVec, $_, 32 ) for 0 .. 1e6 - 1;
        ],
    };

    print "$_: ", length( do{ no strict; ${ $_ } } )
        for qw[ packedV packedW packedVec ];

    __END__
    C:\test>junk0
              Rate   unVec unpackV unpackW
    unVec   1.87/s      --    -29%    -31%
    unpackV 2.64/s     41%      --     -2%
    unpackW 2.70/s     44%      2%      --
    packedV: 4000000
    packedW: 2983490
    packedVec: 4000000

    unpack 'w' is 44% faster than vec, and compresses the data to 75% of its original size to boot, which means less time to transfer from the DB.
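For anyone curious where the 2983490 comes from: pack 'w' emits BER compressed integers, with 7 payload bits per byte, so the number of bytes an id needs depends only on its magnitude. A small sketch using nothing beyond core pack/unpack:

```perl
use strict;
use warnings;

# BER compressed integers ('w') use 7 bits per byte, so small
# numbers need fewer bytes than a fixed 32-bit 'V' (always 4).
printf "%8d -> %d byte(s)\n", $_, length( pack 'w', $_ )
    for 1, 127, 128, 16_383, 16_384, 2_097_151;

# The 2983490 bytes above follow directly: of the ids 1 .. 1e6,
# 127 fit in 1 byte, 16_256 in 2 bytes, and the rest in 3.
my $total = 127 * 1 + 16_256 * 2 + ( 1_000_000 - 16_383 ) * 3;
print "$total\n";    # 2983490
```

So the 'w' format pays off exactly when most of the stored integers are small, which is typical for position lists and delta-coded doc ids.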


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Don't forget that the schema is like this:


      TERM1 -> Docid1PositionsDocid2Positions......


      so you have to separate the values, as tachyon-II said, by using the MSB as a flag, or in some other way if you like...

      Tachyon-II code:
      ########################### PACK with w*
      use strict;
      use warnings;
      use Devel::Size qw(size);    # needed for size() below

      my $MSB   = 1 << 31;
      my $tests = 250_000;
      my $DEBUG = 1;
      print "Doing $tests tests\n";

      my $pack = time();
      my $str  = '';
      for my $doc_id ( 1 .. $tests ) {
          #printf "Doc id: %d\n", $doc_id if $DEBUG;
          $str .= pack "w", $doc_id + $MSB;   # MSB set marks a doc id
          for my $pos ( 0 .. 2 ) {
              $str .= pack "w", $pos;         # positions have no flag
          }
      }
      printf "pack time %d\n", time() - $pack;
      printf "length %d\n", length $str;
      print "Pack w*'s size: " . size( $str ) . " bytes\n";

      my $unpack = time();
      my $dat    = {};
      my $doc_id = undef;
      for my $int ( unpack "w*", $str ) {
          if ( $int > $MSB ) {
              $doc_id = $int - $MSB;
              #printf "\nDoc id: %d\t", $doc_id if $DEBUG;
          }
          else {
              push @{ $dat->{$doc_id} }, $int;
              #print "$int\t" if $DEBUG;
          }
      }
      printf "\n\nunpack time %d\n", time() - $unpack;

      Meanwhile, you will have my own benchmarks too...
        Here is what I've done: I changed the schema by adding the TF (term frequency) for each term in each document, like below:

        Term1 -> DocId1 TF pos DocId2 TF pos DocId3 TF pos ......

        I added the TF to distinguish the doc ids from the positions. I suppose there is another way to distinguish the docs from the positions without adding more info, but so far I haven't found it...
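The TF-as-count idea works with pack 'w*' too and needs no MSB flag: the TF tells the decoder how many positions follow, so the stream decodes unambiguously by count. A small sketch (the %postings data is made up for illustration, and here TF is stored as the exact position count):

```perl
use strict;
use warnings;

# Each group in the stream is: doc id, TF, then TF positions.
my %postings = (
    12 => [ 3, 7, 42 ],    # doc id 12 has positions 3, 7, 42
    99 => [ 5 ],           # doc id 99 has one position
);

my $str = '';
for my $doc ( sort { $a <=> $b } keys %postings ) {
    my @pos = @{ $postings{$doc} };
    $str .= pack 'w*', $doc, scalar @pos, @pos;
}

# Decode: no flag bit needed, the TF drives the group boundaries.
my @ints = unpack 'w*', $str;
my %decoded;
while ( @ints ) {
    my $doc = shift @ints;
    my $tf  = shift @ints;
    $decoded{$doc} = [ splice @ints, 0, $tf ];
}
```

The trade-off versus the MSB flag is one extra integer per document instead of one flag bit per integer, and the TF is often wanted for ranking anyway.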

        Here I save 500,000 documents in the bit string, where for each doc I keep the TF and 4 positions...
        use strict;
        use Devel::Size qw(size);

        my $df     = 500000;
        my $tf     = 3;
        my $wektor = '';
        my $nr     = 0;

        for ( 0 .. $df ) {
            vec( $wektor, $nr++, 32 ) = $_;    # DOC ID
            vec( $wektor, $nr++, 32 ) = $tf;   # TF
            for ( 0 .. $tf ) {
                vec( $wektor, $nr++, 32 ) = $_ + 10;   # POSITIONS
            }
        }
        print "Vector's size: " . size( $wektor ) . " bytes\n";
        #print $nr, "\n";

        ###################### UNPACK VECTOR
        my %vec;
        my $docID = 0;
        my $index = 0;
        my $Aa    = time();
        for ( 0 .. $df ) {
            $docID = vec( $wektor, $index++, 32 );
            $tf    = vec( $wektor, $index++, 32 );
            $vec{$docID} = $tf;
            # print "Doc id: $docID\ttf: $tf\n";
            for ( 0 .. $tf ) {
                # print "\t\tpositions: ", vec( $wektor, $index++, 32 ), "\n";
                vec( $wektor, $index++, 32 );
            }
        }
        print "unpack vector in \t", time() - $Aa, " secs...(oh Yeah!!!)\n";

        __END__
        Vector's size: 12000052 bytes
        unpack vector in 4 secs...(oh Yeah!!!)

        As you can see from the code, I save only the doc id and the TF in a hash, without saving the positions. I am trying to find the appropriate structure to keep all this info...

        That's all for now... I hope you find something faster...