in reply to "I know this code could be better..."
Your program works with two files for each run, one is the index and the other is the content. Under mod_perl the index, at least, should be held in RAM and you will be down to one file read per page. If you have enough RAM you should consider keeping both files in RAM. You could have a hash $file_content{$node} that stores a file per node as each file is read. This way the daemon would only read each file once.
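The caching idea above can be sketched roughly as follows. This is an illustrative sketch, not the original program: the hash %file_content and the sub name node_content are assumptions, as is the file-per-node layout.

```perl
# Sketch: cache each node's file contents in a package-level hash so the
# daemon reads each file from disk only once per process.
our %file_content;

sub node_content {
    my ($node, $path) = @_;
    # Serve from the in-memory cache if this node was read before.
    return $file_content{$node} if exists $file_content{$node};
    open my $fh, '<', $path or die "Cannot open $path: $!";
    local $/;                       # slurp mode: read the whole file at once
    $file_content{$node} = <$fh>;
    close $fh;
    return $file_content{$node};
}
```

Under mod_perl the hash persists between requests, so the open/read cost is paid only on the first hit for each node.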
To further speed up your code, eliminate as many print statements as possible. For example, change:
    print $q->header('text/html');
    print $q->start_html(-title   => $node,
                         -author  => 'schmidtd@co.delaware.pa.us',
                         -BGCOLOR => 'white');
to
    print $q->header('text/html'),
          $q->start_html(-title   => $node,
                         -author  => 'schmidtd@co.delaware.pa.us',
                         -BGCOLOR => 'white');
This uses ',' to separate the arguments to print rather than joining them with '.'; the comma is faster than string concatenation because it avoids building an intermediate string, and it also collapses two print calls into one.
The code goes to the trouble of reading the file into an array, and then printing it out one array element at a time. You don't need to do this. Instead, read the whole file into a single string and print the string.
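A slurp of that kind can be written by undefining the input record separator. This is a minimal sketch: the sub name slurp is my own, not from the original code.

```perl
# Sketch: read the whole file into a single string ("slurping") instead
# of reading it into @lines and printing one element at a time.
sub slurp {
    my ($path) = @_;
    open my $fh, '<', $path or die "Cannot open $path: $!";
    local $/;            # undef $/ disables splitting input into lines
    my $content = <$fh>; # one read returns the entire file
    close $fh;
    return $content;
}

# usage: print slurp('page.html');   # one read, one print
```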
If you don't want to follow my advice on not using an array, at least change

    foreach (@lines) { print $_; }

to something like:

    my $content;
    foreach (@lines) { $content .= $_; }
    print $content;

or a similar statement using join:

    print join "\n", @lines;
There are some other speedup tips at a previous node that I wrote on this topic.
It should work perfectly the first time! - toma