in reply to Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
You can slurp the file in one read and split it yourself:
#! perl -slw
use strict;

my $file = 'test.txt';

open DF, '<:raw', $file or die "$file : $!";

## Setting $/ to a reference to the file's size makes <DF> slurp the
## whole file in a single read; then split it into lines ourselves.
my @test = split "\n", do{ local $/ = \ -s( $file ); <DF> };

close DF;
However, if you are reading this file frequently (say, every time a web page is hit, as your example suggests), then you are probably worrying about the wrong thing. After the first time the file is read, it will be held in the file system cache, so on the second and subsequent reads those 4K reads will be satisfied from cache rather than from disk. You can demonstrate this to yourself if you have a disk activity LED on your machine: run the above script and you should see the disk hit for a sustained period the first time; on subsequent runs you may see a brief access, but no sustained activity.
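If you'd rather put numbers on that than watch an LED, here is a minimal sketch (the file name is just the 'test.txt' assumed above; Time::HiRes ships with Perl) that times the same slurp twice. The second pass should be served from the file system cache and come back noticeably faster:

#! perl -slw
use strict;
use Time::HiRes qw( time );

my $file = 'test.txt';    ## assumed name; any reasonably large file will do

for my $pass ( 1 .. 2 ) {
    my $start = time;

    open my $fh, '<:raw', $file or die "$file : $!";
    ## Same single-read slurp as above
    my $data = do { local $/ = \ -s( $file ); <$fh> };
    close $fh;

    printf "Pass %d: read %d bytes in %.4f seconds\n",
        $pass, length( $data ), time() - $start;
}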
Equally, whilst you may see many 1K calls to the system write API, those writes will usually be buffered in RAM and written to disk asynchronously as the demands on the cache dictate. For example, the system may choose to flush chunks out when the disk head is already in roughly the right position following activity by other processes. If you attempt to optimise the writing done by your process, you can interfere with the dynamics of the system as a whole and actually end up with lower throughput. The best way to ensure optimal IO for your process, and for overall system throughput, is to increase the proportion of your RAM devoted to the system cache.
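For what it's worth, Perl's own IO layer is already batching on your behalf before the OS cache even gets involved. A rough sketch (output file name and record count are made up) that writes many short records: with default buffering these reach the OS as far fewer, larger writes (the exact size depends on your PerlIO build), and forcing autoflush per print is how you would defeat that batching:

#! perl -slw
use strict;
use IO::Handle;    ## only needed if you enable the autoflush() call below

my $out = 'out.txt';    ## assumed output file name

open my $fh, '>', $out or die "$out : $!";

## Uncomment to force a flush to the OS on every print and watch throughput drop:
## $fh->autoflush( 1 );

## 10,000 short records; with default buffering PerlIO coalesces these into
## far fewer, larger writes, and the OS in turn caches those and flushes
## them to disk asynchronously as it sees fit.
print {$fh} "record $_" for 1 .. 10_000;

close $fh or die "close $out : $!";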
Replies are listed 'Best First'.

Re^2: Perl always reads in 4K chunks and writes in 1K chunks... Loads of IO!
    by NeilF (Sexton) on Jan 01, 2006 at 19:55 UTC
    by BrowserUk (Patriarch) on Jan 01, 2006 at 23:35 UTC
    by NeilF (Sexton) on Jan 02, 2006 at 15:10 UTC
    by BrowserUk (Patriarch) on Jan 02, 2006 at 15:44 UTC
    by wfsp (Abbot) on Jan 02, 2006 at 16:08 UTC
    by BrowserUk (Patriarch) on Jan 02, 2006 at 01:51 UTC