The search engine structure I've come up with uses an index file with a word,documentnumber,documentnumber entry on every line, like so:
alpha,0,1,2
bravo,2,3,4
charlie,3,4,5
(etc)
where the word "alpha" is in documents 0, 1 and 2, the word "bravo" is in documents 2, 3 and 4, and so on.
So the structure as it stands requires me to open a 500kb file with hundreds of words/lines in it, and process every line to see if it starts with the search term(s).
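For concreteness, here's roughly what that scan looks like in Perl (just a minimal sketch; the index path and the search term are made-up examples, not part of the original setup):

#!/usr/bin/perl
use strict;
use warnings;

my $index_file = '/www/search/index.txt';   # made-up path to the big index file
my $searchterm = 'bravo';                   # made-up search term

my @documents;
open my $fh, '<', $index_file or die "Can't open $index_file: $!";
while (my $line = <$fh>) {
    chomp $line;
    # each line is word,doc,doc,doc...
    my ($word, @docs) = split /,/, $line;
    if ($word eq $searchterm) {
        @documents = @docs;
        last;                               # stop scanning once the word is found
    }
}
close $fh;

print "Found in documents: @documents\n";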
So it occurred to me yesterday that it might actually be easier to have a structure where I had hundreds of files, where the keyword was the filename and the content was the list of documents. So rather than (pseudocode):

open(large file)
for each line of the file {
    put it into an array
    see if item 0 of the array matches the search term
    grab the document list if it does
}

I could just do something like:

if (a file exists called "/www/search/$searchterm") {
    open it and grab the document list
}

So am I crazy or what?
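And a minimal sketch of the one-file-per-keyword version, again with made-up names (a real version would also want to sanitise $searchterm before using it in a path):

#!/usr/bin/perl
use strict;
use warnings;

my $searchterm = 'bravo';                    # made-up search term
my $file       = "/www/search/$searchterm";  # one small file per keyword

my @documents;
if (-e $file) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    my $list = <$fh>;                        # the file holds just the document list, e.g. "2,3,4"
    close $fh;
    if (defined $list) {
        chomp $list;
        @documents = split /,/, $list;
    }
}

print @documents
    ? "Found in documents: @documents\n"
    : "No documents found for '$searchterm'\n";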
I guess the factors are:
It's obviously a very messy solution, in that I'd have a folder stuffed with a large number of very small files, but in terms of doing less file reading and less I/O, are there any gains?
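One way to find out would be to benchmark the two lookups directly with the core Benchmark module. A rough sketch, assuming both the big index file and the per-keyword files already exist on disk (paths are made up):

#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $index_file = '/www/search/index.txt';    # made-up path to the big index file
my $term       = 'bravo';                    # made-up search term

# Run each strategy for about 5 CPU seconds and compare the rates.
cmpthese(-5, {
    # Strategy 1: scan every line of the single big index file.
    'scan big index' => sub {
        open my $fh, '<', $index_file or die "$index_file: $!";
        while (my $line = <$fh>) {
            last if $line =~ /^\Q$term\E,/;
        }
        close $fh;
    },
    # Strategy 2: open the one small per-keyword file directly.
    'file per keyword' => sub {
        my $file = "/www/search/$term";
        return unless -e $file;
        open my $fh, '<', $file or die "$file: $!";
        my $list = <$fh>;
        close $fh;
    },
});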