PerlMonks  

Re^2: Help on building a search engine (design / algorithms / parsing / tokenising / database design)

by bobtfish (Scribe)
on Jun 07, 2004 at 12:35 UTC ( #361955 )


in reply to Re: Help on building a search engine (design / algorithms / parsing / tokenising / database design)
in thread Help on building a search engine (design / algorithms / parsing / tokenising / database design)

Yes. That's what my code does.

It builds an inverted index of all the content I want to search and then queries that.

It's searching that index that I need to optimise.

Thanks for the links; however, I can't find anything helpful and low-level enough that doesn't only cover the problems I've already solved. (TBH, I can't find anything with actual code / algorithms that isn't a back-of-a-cigarette-packet style demonstration. My code can already do complex and/or/not searches with arbitrary nesting using () in the search, and any number of search terms.)
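For anyone following along, here is a minimal Perl sketch of this kind of inverted index with boolean set operations. The corpus, helper names, and query are made up for illustration; this is not the poster's actual code, and a real implementation would also need the query parser for nested () expressions.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy corpus: doc id => text (illustrative only)
my %docs = (
    1 => 'the cat sat on the mat',
    2 => 'the dog sat on the log',
    3 => 'cats and dogs',
);

# Build the inverted index: word => { doc id => 1 }
my %index;
for my $id ( keys %docs ) {
    $index{$_}{$id} = 1 for split /\W+/, lc $docs{$id};
}

# Boolean operations on postings (hashes of doc ids)
sub and_op { my ( $x, $y ) = @_; return { map { $_ => 1 } grep { $y->{$_} } keys %$x } }
sub or_op  { my ( $x, $y ) = @_; return { %$x, %$y } }
sub not_op { my ( $x, $all ) = @_; return { map { $_ => 1 } grep { !$x->{$_} } keys %$all } }

my %all = map { $_ => 1 } keys %docs;

# 'sat AND NOT dog' => docs containing 'sat' but not 'dog'
my $hits = and_op( $index{sat}, not_op( $index{dog}, \%all ) );
print join( ',', sort keys %$hits ), "\n";    # prints 1
```

The expensive part, as discussed below, is intersecting large postings lists at query time.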

Replies are listed 'Best First'.
Re^3: Help on building a search engine (design / algorithms / parsing / tokenising / database design)
by inman (Curate) on Jun 07, 2004 at 13:45 UTC
    Try Perlfect Search. A full search engine implemented in Perl, so you get the source code and everything!
Re^3: Help on building a search engine (design / algorithms / parsing / tokenising / database design)
by BrowserUk (Patriarch) on Jun 07, 2004 at 19:03 UTC

    If you have the space, one very effective way of speeding up the searching of your inverted index is to index it!

    Once you have created your inverted index, you then create a second index from the first. This indexes pairs of words. The keys are pairs of words from your primary index. The values are the pages that contain the pairings. This vastly reduces the number of pages associated with each key. The cost is the huge number of keys.

    A partial solution is to pair only unusual (low hit-count) words with common (high hit-count) words, once you have excluded all the really common words ('a', 'the', 'it', etc.).

    If the search doesn't include any uncommon words, the secondary index doesn't help, but you find that out very quickly, and there is no alternative but to go through all the hits.

    If the search consists of only uncommon words, then the results from the primary index will be minimal anyway.

    But when the search includes one or more common words and one or more uncommon ones, intersecting the huge list from the common word with the small list from the uncommon word at runtime is expensive. Pre-computing these intersections can substantially reduce the runtime cost.

    It's fairly easy to set up, but it requires a substantial amount of (pre-)processing power to maintain.
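    A rough Perl sketch of the pair-index idea. The word lists, the threshold for "common", and all names here are invented for illustration; a real index would be far too large to build in memory like this.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pretend primary inverted index: word => { doc id => 1 } (illustrative data)
my %index = (
    perl  => { map { $_ => 1 } 1 .. 1000 },    # common word: huge postings list
    monks => { map { $_ => 1 } 1, 7, 42 },     # uncommon word: few postings
);

my %stopwords = map { $_ => 1 } qw(a the it and or of);
my $common_threshold = 100;                    # arbitrary cutoff for "common"

# Build the secondary index: "uncommon\0common" => docs containing both words
my %pair_index;
my @words    = grep { !$stopwords{$_} } keys %index;
my @common   = grep { keys %{ $index{$_} } >= $common_threshold } @words;
my @uncommon = grep { keys %{ $index{$_} } <  $common_threshold } @words;

for my $u (@uncommon) {
    for my $c (@common) {
        my %both = map { $_ => 1 } grep { $index{$c}{$_} } keys %{ $index{$u} };
        $pair_index{"$u\0$c"} = \%both if %both;
    }
}

# At query time, 'monks AND perl' becomes a single hash lookup instead of
# intersecting a 3-element postings list with a 1000-element one.
my $hits = $pair_index{"monks\0perl"};
print join( ',', sort { $a <=> $b } keys %$hits ), "\n";    # prints 1,7,42
```

    Note how the pre-processing cost (the nested loop over uncommon × common words) is exactly the cost being traded for cheap lookups, as described above.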


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
