Actually, I'm not "stuck" with a flat text file, but I can't find a really good way to split things up. For instance, if the user searches for "boy", making a file called "boy.results" with that info would not work, because some of the results from a "boy" search also come up in a "girl" search and a "child" search, but not all of them. If I did that I would have to open and search all the files anyway, defeating the whole purpose of splitting them up.
If, as the results came in, I put them in separate files based on the first digit of the id (a "1" file, a "2" file, a "3" file, and so on), and then opened each file only as it is needed in future searches, would this be faster, do you think? Is it faster to open one file with all the entries and search it, or to open several files with fewer entries each and search each one individually as needed?
Would it be faster to put them in MySQL and then do a search like "SELECT * FROM okphotos WHERE id LIKE '$id';" for each result?
I am trying all of these as we speak, but your suggestions will save me from trying something in vain. Thanks.
kbeen.
MySQL could/would cache that data in memory if the table is used often; however, you lose speed if the MySQL server is on a different machine than the script. If you do a separate SELECT for each ID you gain speed again, though, especially as the cache becomes larger. I think an SQL server is a good way to go.
Don't use LIKE, though; use =. (A LIKE with no wildcards, i.e. no % or _, might get optimized to an = by the server, but I'm not sure.)
Use numeric fields if possible; if not, create an index on a prefix of the id. For example:

    CREATE TABLE okphotos (
        id CHAR(10) NOT NULL,   -- Update: don't use VARCHAR
        ...
        INDEX index_id (id(3))  -- index only the first 3 characters
    );
Tiago