in reply to Re^5: Byte allign compression in Perl..
in thread Byte allign compression in Perl..

I added W-BER to my benchmark. It just about beats binary ('v*') on space; it's about the same for unpacking but slower for packing.

Any way you slice it, the single best option is to restructure the schema so that less data needs to be retrieved...but then you know that :)

    ## Note this run uses 1000 documents rather than 5000.
    ## The relative performance remains the same though.

    Run with 15000 unique words in 1000 documents (Ave: 554 words/doc)

    ASCII  uses 4755001 bytes
    W-BER  uses 3196726 bytes
    Binary uses 3279128 bytes
    Elias  uses 4118678 bytes

    1 trial of Packing ascii    ( 9.078s total)
    1 trial of Unpacking ascii  ( 2.942s total)
    1 trial of Packing W-BER    (16.144s total)
    1 trial of Unpacking W-BER  ( 8.219s total)
    1 trial of Packing binary   ( 9.504s total)
    1 trial of Unpacking binary ( 8.223s total)
    1 trial of Packing Elias    (11.176s total)
    1 trial of Unpacking Elias  ( 9.750s total)
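For anyone who wants to poke at the space comparison themselves, here is a minimal round trip of the two pack templates. I'm assuming "W-BER" here means pack's 'w' (BER compressed integer) format; the position values are made up:

    use strict;
    use warnings;

    # 'v*' packs each value as a fixed 2 bytes (little-endian, max 65535);
    # 'w*' packs each value as a variable-length BER compressed integer,
    # so small numbers cost 1 byte and only large ones cost 3 or more.
    my @posns = ( 42, 273, 1723, 16384 );    # made-up word positions

    my $binary = pack 'v*', @posns;    # 8 bytes: 2 per value
    my $wber   = pack 'w*', @posns;    # 8 bytes here: 1 + 2 + 2 + 3

    printf "binary: %d bytes, W-BER: %d bytes\n",
        length $binary, length $wber;

    # Round trip to confirm nothing is lost.
    print join( ' ', unpack 'w*', $wber ), "\n";    # 42 273 1723 16384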

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^7: Byte allign compression in Perl..
by tachyon-II (Chaplain) on Apr 12, 2008 at 06:23 UTC

    As far as I can see the intersection problem is O(n log n), which makes scaling it problematic. There appear to be special cases where this can be reduced to O(n), which would allow you to deal with an arbitrarily large data set in finite time provided you have the horsepower.
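    The classic O(n) special case is when the posting lists are kept sorted by doc-id, as an inverted index normally does: a single merge pass then intersects two lists in linear time. A minimal sketch, with made-up doc-ids:

        use strict;
        use warnings;

        # Intersect two sorted doc-id lists in O(n+m) with one merge pass,
        # instead of O(n log n) for sorting first.
        sub intersect_sorted {
            my( $x, $y ) = @_;
            my( $i, $j, @common ) = ( 0, 0 );
            while( $i < @$x and $j < @$y ) {
                if(    $x->[ $i ] < $y->[ $j ] ) { ++$i }
                elsif( $x->[ $i ] > $y->[ $j ] ) { ++$j }
                else  { push @common, $x->[ $i ]; ++$i; ++$j }
            }
            return \@common;
        }

        my $docs = intersect_sorted( [ 3, 223, 940, 1778 ], [ 5, 223, 940, 4407 ] );
        print "@$docs\n";    # 223 940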

    It was interesting to see how bad subst was in the code above compared to a straight unpack (~40x slower). Oh for a world full of fun problems to solve. Anyway, back to the day-to-day drudgery ;-)
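    For the curious, the shape of that comparison is easy to reproduce with the Benchmark module. This toy (not the thread's actual code) peels 16-bit values off the front of a string with s/// versus a single unpack call:

        use strict;
        use warnings;
        use Benchmark qw( cmpthese );

        my $packed = pack 'v*', 1 .. 10_000;

        cmpthese( -3, {
            # Repeated substitution: chew two bytes at a time off a copy.
            subst => sub {
                my $s = $packed;
                my @n;
                push @n, unpack 'v', $1 while $s =~ s/^(..)//s;
            },
            # One unpack call does the whole string at once.
            unpack => sub {
                my @n = unpack 'v*', $packed;
            },
        } );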

      As far as I can see the intersection problem is O(n log n), which makes scaling it problematic.

      How does 312ms sound for a two-word query in a DB containing 15,000 words in 5000 documents (ave: 554 words/doc; 2.7 million word-document pairs)?

      678848=# select
      678848-#     "wordFromId"( "word-id" ),
      678848-#     "docnameFromId"( "doc-id" ),
      678848-#     posns
      678848-# from "word-doc"
      678848-# where
      678848-#     "word-id" in (
      678848(#         "idFromWord"( 'aachen' ),
      678848(#         "idFromWord"( 'zwitterions' )
      678848(#     )
      678848-#     and
      678848-#     "doc-id" in (
      678848(#         select "docIdsContaining"( "idFromWord"( 'aachen' ) )
      678848(#         intersect
      678848(#         select "docIdsContaining"( "idFromWord"( 'zwitterions' ) )
      678848(#     )
      678848-# order by
      678848-#     "word-id", "doc-id"
      678848-# ;
       wordFromId  | docnameFromId | posns
      -------------+---------------+-------
       aachen      | document_223  |  1723
       aachen      | document_940  |  1903
       aachen      | document_1778 |   273
       aachen      | document_1897 |  3163
       aachen      | document_3990 |   817
       aachen      | document_4407 |  4736
       zwitterions | document_223  |  3072
       zwitterions | document_940  |  3504
       zwitterions | document_1778 |   664
       zwitterions | document_1897 |  4186
       zwitterions | document_3990 |   355
       zwitterions | document_4407 |  1459
      (12 rows)

      678848=# select count( * ) from "word-doc";
       count
      ---------
       2719282
      (1 row)


        Sounds good enough for government work (in fact you beat Google), and it's certainly a practical working solution to the problem as stated. It does sound as though the OP has a bigger data set in mind though.

        One point he does not seem to get is how caching reduces the need to do this: each time a search for "some term" comes in, you cache the first n results so you don't have to repeat the expensive task. You can see Google do this if you search for 'aachen zwitterions'. The first search took 0.35 seconds to retrieve 514 results, but the second took only 0.07 seconds. I tried this on google.com.au and google.com, so you will need to try another obscure pair or a different server farm to see it. What amazes me is the raw speed, but then again there are probably several thousand machines dealing with the query.
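        A minimal sketch of that caching idea; expensive_search() is a hypothetical stand-in for the real query, and there is no expiry or size cap here:

            use strict;
            use warnings;

            # Hypothetical stand-in for the real, expensive query.
            sub expensive_search {
                my $term = shift;
                return [ map "hit_${term}_$_", 1 .. 100 ];
            }

            my %cache;    # term => arrayref of results

            # First call per term pays full price; repeats come from the cache.
            sub cached_search {
                my( $term, $n ) = @_;
                my $results = $cache{ lc $term } ||= expensive_search( $term );
                my $last = $n - 1 < $#$results ? $n - 1 : $#$results;
                return @{ $results }[ 0 .. $last ];
            }

            my @slow = cached_search( 'aachen zwitterions', 10 );   # runs the query
            my @fast = cached_search( 'aachen zwitterions', 10 );   # cache hit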

        Intriguing dataset you have there! I won't ask.