I'm working on an online document access and document management system. The system needs to handle a lot of database queries, since I am not querying the filesystem for this info (sometimes files may not be there, the filesystem may not be mounted, and so on).
To the logged-in user it looks like a filesystem hierarchy, but it is really a graphical representation of the slice of the file tree they have been granted some level of access to.
When a user is in a part of their allowed hierarchy, I query a database for info on the files we are about to render: file name, size, the filesystem it resides on, md5sum, and so on.
Rendering the output may require the same data on those files again, and other users may request the same data on the same files as well.
So I was thinking... how about this: what if I have a 'daemon' sort of script that keeps the most recent DB query results in memory? Then if a user requests info on file id 234, we first ask the daemon whether, say, %{$main::RECENT_FILES{234}} is available, and only if it isn't does the daemon fetch it from the database.
I could store the 1000 most recent results (and results for other things as well), and delete anything older than some threshold.
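Roughly what I'm picturing, as a process-local sketch; the %RECENT_FILES structure, the files table and its columns, and the limits are just placeholders I made up:

```perl
use strict;
use warnings;

# Sketch only: %RECENT_FILES, the `files` table and its columns,
# and the limits below are placeholders, not the real schema.
my %RECENT_FILES;          # file_id => { data => {...}, fetched => epoch time }
my $MAX_AGE   = 300;       # drop entries older than this many seconds
my $MAX_SLOTS = 1000;      # keep at most the 1000 most recent results

sub get_file_info {
    my ($dbh, $file_id) = @_;

    # Serve from memory if we have a fresh copy.
    my $entry = $RECENT_FILES{$file_id};
    return $entry->{data}
        if $entry && time() - $entry->{fetched} < $MAX_AGE;

    # Otherwise hit the database and remember the result.
    my $row = $dbh->selectrow_hashref(
        'SELECT name, size, filesystem, md5sum FROM files WHERE id = ?',
        undef, $file_id,
    );
    $RECENT_FILES{$file_id} = { data => $row, fetched => time() };

    # Crude eviction: once over the limit, drop the oldest entries.
    if (keys %RECENT_FILES > $MAX_SLOTS) {
        my @by_age = sort { $RECENT_FILES{$a}{fetched} <=> $RECENT_FILES{$b}{fetched} }
                     keys %RECENT_FILES;
        my $excess = @by_age - $MAX_SLOTS;
        delete @RECENT_FILES{ @by_age[ 0 .. $excess - 1 ] };
    }

    return $row;
}
```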
Would this really give me an edge when querying info for thousands of files, or should I instead just keep one database connection open for all users to query through, something like that? As it is, I open a connection for each logged-in user.
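If it helps clarify what I mean by "one open connection": something like DBI's connect_cached (which, as I understand it, reuses a live handle per process) is the shape I'm thinking of; the DSN and credentials here are made up:

```perl
use strict;
use warnings;
use DBI;

# Reuse one handle per process instead of opening a new connection
# for every logged-in user. DSN and credentials are placeholders.
sub dbh {
    # connect_cached returns the same live handle when called again with
    # identical arguments, and reconnects if the cached one has gone away.
    return DBI->connect_cached(
        'dbi:mysql:database=docstore;host=localhost',
        'docuser', 'secret',
        { RaiseError => 1, AutoCommit => 1 },
    );
}

my $dbh = dbh();   # request handlers call this instead of DBI->connect
```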
It seems that keeping this data in memory would be quicker. How would I go about doing this, and where should I look? That is one of my questions: how do I write a sort of daemon, a Perl script that stays alive, and how on earth would I access its symbol table / namespace?
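As far as I can tell, another process can't just reach into the daemon's symbol table directly, so I'm guessing the daemon would have to answer lookups over some kind of IPC, e.g. a UNIX-domain socket. A rough sketch of the shape I'm imagining; the socket path and the one-line protocol are made up:

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

# Daemon side: stays alive, owns the cache, answers lookups over a
# UNIX-domain socket. Path and wire format are invented for the sketch.
my $SOCK_PATH = '/tmp/filecache.sock';
unlink $SOCK_PATH;

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $SOCK_PATH,
    Listen => 5,
) or die "cannot listen on $SOCK_PATH: $!";

# The cache lives inside the daemon process; values here are
# pre-serialized strings (a real daemon would serialize the DB row).
my %RECENT_FILES;

while (my $conn = $server->accept) {
    chomp(my $request = <$conn> // '');     # e.g. "GET 234"
    if (my ($id) = $request =~ /^GET (\d+)$/) {
        # On a miss, the real daemon would query the DB here and cache the row.
        print $conn exists $RECENT_FILES{$id} ? "$RECENT_FILES{$id}\n" : "MISS\n";
    }
    close $conn;
}
```

And the web-facing script would talk to it as a client, instead of touching the daemon's namespace:

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

my $client = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/tmp/filecache.sock',
) or die "cannot reach the daemon: $!";

print $client "GET 234\n";
chomp(my $answer = <$client>);   # either the cached info or "MISS"
```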