in reply to I need speed
It may seem counterintuitive, but I think you'll get a fair bang for your buck by opening a pipe from the locate(1) program. It's designed to answer exactly this kind of query efficiently.
One caveat: if you search for a given string "abc", locate will report a file even when "abc" appears anywhere in the pathname, not just in the filename itself. You can filter out those false hits by taking the basename of each returned path and checking it against your target string as you read it in.
use File::Basename;

# remember to untaint $str
open LOC, "/usr/bin/locate $str |"
    or die "Cannot open input pipe: $!\n";
while ( <LOC> ) {
    chomp;
    my $file = basename $_;
    next unless index( $file, $str ) > -1;
    print "$_\n";    # output this filename
}
close LOC;
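(As an aside, the list form of the pipe open, available since Perl 5.8, never invokes a shell, so any metacharacters in $str reach locate as a literal argument rather than being interpolated. A minimal sketch of the same loop in that style:)

use File::Basename;

# List-form pipe open: no shell involved, $str is passed verbatim.
open my $loc, '-|', '/usr/bin/locate', $str
    or die "Cannot open input pipe: $!\n";
while ( my $path = <$loc> ) {
    chomp $path;
    next unless index( basename($path), $str ) > -1;
    print "$path\n";
}
close $loc;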
I'm not convinced a DBM hash will speed things up: at the end of the day you're still doing a linear scan of 200K records. (And since you're matching partial substrings of names, you can't exploit the hash for direct key lookups anyway.)
Another idea would be to simplify the contents of the locate file, to reduce its size: just store the name of each file. If and when a search actually hits one, stat the file at that point in time (the results will be fresher to boot). Storing more filenames per disk block means fewer disk blocks to read overall.
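Something like this, as a rough sketch; I'm assuming the slimmed-down index (call it files.lst, a made-up name) holds one full pathname per line and nothing else, so stat only runs on actual hits:

open my $idx, '<', 'files.lst'
    or die "Cannot open files.lst: $!\n";
while ( my $path = <$idx> ) {
    chomp $path;
    next unless index( $path, $str ) > -1;
    my @info = stat $path;    # fetched only now, so it's fresh
    next unless @info;        # the file may have vanished since indexing
    printf "%s: %d bytes, modified %s\n",
        $path, $info[7], scalar localtime $info[9];
}
close $idx;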
Depending on the ratio of CPU to disk I/O performance, which only testing can reveal, it may even be faster to compress the file and decompress it on the fly as you read it, on the assumption that the extra CPU cost is offset by the reduced number of disk blocks read.
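A pipe from gzip(1) is the simplest way to try that (again just a sketch, reusing the made-up files.lst name from above):

# Read the compressed index through gzip -dc; nothing uncompressed
# ever touches the disk.
open my $zip, '-|', '/usr/bin/gzip', '-dc', 'files.lst.gz'
    or die "Cannot open gzip pipe: $!\n";
while ( my $path = <$zip> ) {
    chomp $path;
    # ... same matching and stat logic as above ...
}
close $zip;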