PerlMonks  

Re^2: Bidirectional lookup algorithm? (Solution.)

by bitingduck (Chaplain)
on Jan 24, 2015 at 06:50 UTC ( [id://1114337] )


in reply to Re: Bidirectional lookup algorithm? (Solution.)
in thread Bidirectional lookup algorithm? (Updated: further info.)

Perfect hashing/Minimal perfect hashing seems like it might both reduce the space requirement and speed up the lookups.

However, generating such functions is non-trivial if you start from scratch...

This may not be as horrible as it sounds if your code is going to get a lot of use. Minimal Perfect Hashing looks like the only way you're going to get O(1) on lookups, and if you're going to be getting bigger and bigger data sets, the time up front starts to look more cost effective. A bit of digging around found a real algorithm that someone actually implemented in C and tested, designed for use with both integer keys and character keys. Unfortunately I've only found the original paper and a later one with pseudocode, nothing quite canned.

The paper with pseudocode is here: Czech, Havas, Majewski, and an earlier paper that's a little denser to interpret is here Havas, Majewski. This page: Schnell gives a pretty readable interpretation of it as well. It doesn't look too painful to implement.
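The Schnell description translates fairly directly into code. Below is a rough sketch of the CHM construction in Python, as I understand it from that page: each key becomes an edge between two randomly-hashed vertices, and if the resulting graph is acyclic, vertex labels g[] can be solved so that h(k) = (g[f1(k)] + g[f2(k)]) mod m is a minimal perfect hash. The function names and the vertex-count heuristic (n = 2m + 1) are my own choices, not taken from the papers.

```python
import random

def build_chm(keys, max_tries=200):
    """Sketch of the CHM construction: pick two random hash functions,
    treat each key as an edge (f1(k), f2(k)) in a graph, and if the
    graph is acyclic, assign vertex labels g[] so that
        h(k) = (g[f1(k)] + g[f2(k)]) % m
    maps the m keys one-to-one onto 0..m-1."""
    m = len(keys)
    n = 2 * m + 1  # more vertices than edges makes an acyclic graph likely
    for _ in range(max_tries):
        s1, s2 = random.getrandbits(32), random.getrandbits(32)
        f1 = lambda k: hash((s1, k)) % n
        f2 = lambda k: hash((s2, k)) % n
        adj = [[] for _ in range(n)]
        ok = True
        for i, k in enumerate(keys):
            u, v = f1(k), f2(k)
            if u == v:          # self-loop: graph can't be acyclic, retry
                ok = False
                break
            adj[u].append((v, i))
            adj[v].append((u, i))
        if not ok:
            continue
        # Walk each tree of the (hoped-for) forest, solving
        # g[v] = (i - g[u]) % m along every edge; reaching an
        # already-labeled vertex means there is a cycle.
        g = [None] * n

        def label(root):
            g[root] = 0
            stack = [(root, None)]
            while stack:
                u, in_edge = stack.pop()
                for v, i in adj[u]:
                    if i == in_edge:
                        continue      # don't walk back along the same edge
                    if g[v] is not None:
                        return False  # cycle found: retry with new hashes
                    g[v] = (i - g[u]) % m
                    stack.append((v, i))
            return True

        if all(g[v] is not None or label(v) for v in range(n)):
            return lambda k: (g[f1(k)] + g[f2(k)]) % m
    raise RuntimeError("no acyclic graph found; try more vertices or tries")
```

The retry loop is where the expected O(m) construction time comes from: with n comfortably larger than m, a random graph is acyclic with good probability, so only a few attempts are usually needed.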

I also found what looks like a similar algorithm, with bi-directional lookup included, implemented in a Python package called pyDAWG that you could either call or port.

A little more digging around on the relevant names might find you some C code that does about the same.

EDIT: allegedly there's a Java package floating around called "ggperf" that's an implementation of the CHM algorithm (and is supposed to be faster than "gperf", the minimal perfect hash generator that's part of libg++), but I couldn't find source, just a paper that amounts to documentation for ggperf.

Replies are listed 'Best First'.
Re^3: Bidirectional lookup algorithm? (Solution.)
by BrowserUk (Patriarch) on Jan 24, 2015 at 13:15 UTC

    bitingduck, thank you for your research. The links provided made for fascinating (if at times somewhat bewildering :) reading.

    In particular the Schnell article made the CHM algorithm understandable; and the pyDAWG code is relatively concise and understandable.

    I think I understand the process (if not the implementation) enough to see a problem (for my application).

    Given that my datasets are generated at runtime, and the goal is to be able to make them as large as available memory will allow (but no larger), it would be necessary to run the CHM algorithm within the same process, and after the dataset has been generated.

    The problem is that, even using the most parsimonious implementation of a DAG, the combined size of the node and edge structures (with their required pointers) requires substantially more memory per string-number pair than the data itself. Using the pyDAWG implementation as an example: each pairing requires two DAWGNode structs at 18 bytes each; and each key needs one DAWGEdge struct per byte, covering both the string and the integer (say avg. 5*9 + 8*9 = 117 bytes).

    Thus a quick estimate: for each 13 bytes of data (5-byte string + 8-byte integer), the algorithm would require a minimum of 117 + 36 = 153 bytes of structs, multiplied by the number of pairs, in order to build the DAG.
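Recomputing that estimate from the per-struct sizes quoted above (two 18-byte DAWGNodes per pair, plus one 9-byte DAWGEdge per byte of the string and of the integer) makes the overhead ratio explicit:

```python
# Back-of-envelope check of the DAG build cost per string/number pair,
# using the per-struct sizes quoted above (pyDAWG figures).
node_bytes = 18       # one DAWGNode struct
edge_bytes = 9        # one DAWGEdge struct
avg_string = 5        # average string length in bytes
int_bytes  = 8        # 64-bit integer key

payload = avg_string + int_bytes                 # 13 bytes of actual data
nodes   = 2 * node_bytes                         # 36 bytes of node structs
edges   = (avg_string + int_bytes) * edge_bytes  # 117 bytes of edge structs
structs = nodes + edges                          # 153 bytes of overhead

print(structs, structs / payload)                # roughly a 12x overhead
```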


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

      That's interesting how much extra space it takes to build the structure, and possibly part of why such algorithms aren't already coded up in a lot of libraries, given that the CHM algorithm dates back to 1992. I was also curious whether the setup time would be too much; it wasn't clear that it really would do the generation in O(M+N) time with real data sets.

      It was interesting research -- I read through the whole thread and got curious why there aren't more canned packages that do minimal perfect hashing, given the value it seems to offer for some modern applications. I'm still not sure I completely understand why there aren't (though it's probably the memory cost vs. time cost). Then it kind of took me down a rabbit hole of interesting reading. My first inclination was to do something with trees and indexed lists of child nodes: it's more space-intensive than a pair of hashes, but probably a little faster than a binary search.
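For comparison, the "pair of hashes" baseline the thread keeps returning to can be sketched in a few lines (a minimal illustration, not anyone's actual implementation): two plain dicts, one per direction, costing two stores per pair but giving O(1) average lookups both ways.

```python
# Minimal sketch of the two-hash baseline for bidirectional lookup:
# one dict per direction, doubling the store cost per pair.
class BiMap:
    def __init__(self):
        self.by_str = {}   # string -> number
        self.by_num = {}   # number -> string

    def add(self, s, n):
        self.by_str[s] = n
        self.by_num[n] = s

    def num(self, s):
        return self.by_str.get(s)

    def str_(self, n):
        return self.by_num.get(n)
```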

        That's interesting how much extra space it takes to build the structure, and possibly part of why such algorithms aren't already coded up in a lot of libraries, given that the CHM algorithm dates back to 1992.

        The DAG is only needed whilst the algorithm is searching for the perfect hash function. Once the hash function has been discovered, the DAG is discarded and the data can be stored in a very simple hash structure; basically just an array indexed by the generated hash function.

        For many applications -- spell checkers and the like -- where the dictionary is known in advance, the hash function is generated in a separate process and the simple hash table generated and stored to disk. The applications that use the hashtable just load it up at startup and use the generated hash function to look things up in the table.
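The runtime side described above really is that simple; a sketch, with the caveat that `h` here is a dict-based stand-in for a generated perfect hash function, not a real one:

```python
# Sketch of the runtime side: once a (minimal perfect) hash function
# h(key) -> 0..m-1 is known, the table is just a flat array.
def make_table(pairs, h):
    table = [None] * len(pairs)
    for k, v in pairs:
        table[h(k)] = (k, v)    # one slot per key, no collisions
    return table

def lookup(table, h, k):
    slot = table[h(k)]
    # A perfect hash maps unknown keys somewhere too, so confirm the key.
    return slot[1] if slot is not None and slot[0] == k else None
```

The stored key in each slot is what lets the table reject keys that were never in the dictionary, since the hash function itself gives no such guarantee.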

        Because my application generates essentially random datasets at runtime, a different dataset for each run, CHM is unsuitable for my application.


