in reply to Re^2: Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
in thread Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
The problem is, it is quite likely that your ISP is measuring your IO in terms of bytes read and written rather than the number of reads and writes, so reducing the latter is unlikely to satisfy them.
Also, when you have read the entire file, there is no need to re-write the entire thing in order to add a new line. If you open the file for reading and writing, then once you have read it, the file pointer will be perfectly placed to append any new line to the end. That reduces your writes to 1 per new addition. If there is no new addition (the user is just refreshing), you'll have no writes at all.
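A minimal sketch of that idea (the file name `forum.txt` and its contents are made up for illustration; the point is the `+<` mode and the single appending `print`):

```perl
use strict;
use warnings;

my $file = 'forum.txt';    # hypothetical file name for illustration

# Seed the file so the example is self-contained.
open my $seed, '>', $file or die "Cannot create $file: $!";
print {$seed} "post 1\npost 2\n";
close $seed;

# Open once for reading AND writing.
open my $fh, '+<', $file or die "Cannot open $file: $!";
my @lines = <$fh>;         # read everything; the pointer is now at EOF

# No rewrite needed: a plain print appends the new post.
print {$fh} "post 3\n";

close $fh or die "Cannot close $file: $!";
```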
Also, you presumably do not redisplay the entire forum each time, but rather only the last 20 or so lines?
If this is so, then you should not bother to re-read the entire file each time, but rather use File::ReadBackwards to get just those lines you intend to display. If you do this, then you can use `seek FH, 0, 2` to reposition the pointer to the eof and then append new lines without having to re-write the entire file each time.
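Something along these lines, assuming the CPAN File::ReadBackwards module is installed (file name and post contents are again invented; `$want` would be your display count, e.g. 20):

```perl
use strict;
use warnings;
use File::ReadBackwards;

my $file = 'forum.txt';    # hypothetical file name for illustration
my $want = 2;              # number of recent posts to display

# Seed the file so the example is self-contained.
open my $seed, '>', $file or die "Cannot create $file: $!";
print {$seed} "post 1\npost 2\npost 3\n";
close $seed;

# Read only the last $want lines, newest-first, then restore order.
my $bw = File::ReadBackwards->new($file)
    or die "Cannot read $file backwards: $!";
my @recent;
while (defined(my $line = $bw->readline)) {
    unshift @recent, $line;
    last if @recent == $want;
}
$bw->close;

# To add a post, seek to eof (whence 2 = SEEK_END) and append.
open my $fh, '+<', $file or die "Cannot open $file: $!";
seek $fh, 0, 2;
print {$fh} "post 4\n";
close $fh;
```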
Using this method, you can fix the total overhead per invocation to (say) 20 reads and 0 or 1 writes. You'll need to deploy locking, but from your code above you seem to be already familiar with that.
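For the locking, the usual `flock` pattern would apply; a sketch (file name invented, and note the re-`seek` after the lock is acquired, since another process may have appended while this one was waiting):

```perl
use strict;
use warnings;
use Fcntl qw(:flock SEEK_END);

my $file = 'forum.txt';    # hypothetical file name for illustration

# Seed the file so the example is self-contained.
open my $seed, '>', $file or die "Cannot create $file: $!";
print {$seed} "existing post\n";
close $seed;

open my $fh, '+<', $file or die "Cannot open $file: $!";
flock $fh, LOCK_EX or die "Cannot lock $file: $!";

# Re-seek after acquiring the lock: the eof may have moved.
seek $fh, 0, SEEK_END;
print {$fh} "locked append\n";

close $fh;    # closing the handle releases the lock
```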
Replies are listed 'Best First'.

Re^4: Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
  by NeilF (Sexton) on Jan 02, 2006 at 15:10 UTC
  by BrowserUk (Patriarch) on Jan 02, 2006 at 15:44 UTC
  by wfsp (Abbot) on Jan 02, 2006 at 16:08 UTC