Perhaps you want to build a spider that crawls a site around and around. That is possible, done recursively: you parse the first page you get (e.g. index.html, or whatever the starting URL returns), index its keywords (or search for the keyword you want), then look for other URLs in the page and parse those too, until you don't find any more URLs or they point to an external location.
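Here is a minimal sketch of that recursive idea in Python, using only the standard library. The starting URL is a placeholder, and the keyword-indexing step is left as a comment since that part is up to you:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkParser(HTMLParser):
        """Collects href attributes from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(url, site_host, seen):
        """Fetch one page, then recurse into links on the same site."""
        if url in seen:
            return
        seen.add(url)
        try:
            with urllib.request.urlopen(url) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return  # skip pages that fail to load
        # ... index the page's keywords here ...
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            # stop at external locations: only follow same-host links
            if urlparse(absolute).netloc == site_host:
                crawl(absolute, site_host, seen)

    start = "http://www.example.com/index.html"  # placeholder starting URL
    crawl(start, urlparse(start).netloc, set())

The `seen` set is what keeps the recursion from looping forever when pages link back to each other.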
I would recommend storing the keywords for every page in your local database, serving the hits from there, and running your spider(s) on a regular basis.
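For example, a keyword table in SQLite could look like the sketch below; the database file name and table layout are just assumptions for illustration:

    import sqlite3

    conn = sqlite3.connect("spider.db")  # hypothetical database file
    conn.execute("""CREATE TABLE IF NOT EXISTS keywords
                    (keyword TEXT, url TEXT, UNIQUE(keyword, url))""")

    def index_page(url, words):
        """Record each keyword found on a page."""
        conn.executemany(
            "INSERT OR IGNORE INTO keywords (keyword, url) VALUES (?, ?)",
            [(w.lower(), url) for w in words])
        conn.commit()

    def hits(keyword):
        """Serve search hits from the local database, not the live site."""
        rows = conn.execute(
            "SELECT url FROM keywords WHERE keyword = ?", (keyword.lower(),))
        return [url for (url,) in rows]

    index_page("http://www.example.com/index.html", ["perl", "spider"])
    print(hits("spider"))

This way the searches stay fast no matter how slow the target site is, and the spider only needs to run as often as the site actually changes.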
Good luck!
Update: this is discussed in more detail in part 2. I am just reading that now :)
--
tune