Since Perl 5.14 the default PerlIO buffer is 8k, and it's configurable only when Perl is built.
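For reference, the build-time knob (as described in perl5140delta; verify against your perl's perlio.h) is the PERLIOBUF_DEFAULT_BUFSIZ compile-time constant, set via Configure's -Accflags — a build configuration fragment, not something you can change at runtime:

```shell
# Bake a 64k default PerlIO buffer into the interpreter at build time.
sh Configure -des -Accflags='-DPERLIOBUF_DEFAULT_BUFSIZ=65536'
make && make test && make install
```

Which is exactly the problem: the setting is frozen into the binary.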
For my current project I need to read from up to 100 files concurrently.
I've demonstrated that on Windows, when reading a single file, 64k reads work out to be the most efficient. I've also proved to myself that when processing input from multiple files concurrently (interleaved), using even bigger read sizes reduces the number of seeks between file positions and can give substantial gains.
Compile-time configuration doesn't really cut it. Would you use a module that required you to re-build Perl?
You could surely use tie to make a read use sysread.
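Something along these lines, perhaps. This is an untested sketch of the tie approach, not an existing module: a TIEHANDLE class that services readline() from a buffer refilled by sysread() in a caller-chosen chunk size. The class name BigBuf and the 64k default are my own inventions.

```perl
use strict;
use warnings;

package BigBuf;

sub TIEHANDLE {
    my ( $class, $path, $bufsize ) = @_;
    open my $fh, '<', $path or die "open '$path': $!";
    binmode $fh;    # raw bytes; we do our own line splitting
    return bless { fh => $fh, buf => '', size => $bufsize || 64 * 1024 }, $class;
}

sub READLINE {
    my $self = shift;
    my $nl;
    while ( ( $nl = index( $self->{buf}, "\n" ) ) < 0 ) {
        my $got = sysread( $self->{fh}, $self->{buf}, $self->{size},
                           length $self->{buf} );
        die "sysread: $!" unless defined $got;
        last if !$got;    # EOF: fall through with whatever is buffered
    }
    # 4-arg substr removes and returns the line (with its newline) in one go
    return substr( $self->{buf}, 0, $nl + 1, '' ) if $nl >= 0;
    return length $self->{buf}
        ? substr( $self->{buf}, 0, length( $self->{buf} ), '' )  # final partial line
        : undef;                                                 # true EOF
}

sub CLOSE { close $_[0]{fh} }

package main;

# Usage: tie a handle, then read it with the ordinary line-oriented idioms.
# tie *FH, 'BigBuf', 'some/file.log', 1024 * 1024;   # 1MB disk reads
# while ( my $line = <FH> ) { ... }
# close FH;
```

It only covers scalar-context READLINE (the `while (<FH>)` case); a real module would also want PRINT, SEEK, $/ handling, and the rest.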
Indeed, I've been hand-coding sliding buffers with adaptations to specific usages for years, but I thought I saw mention of a module that would allow all the usual line-oriented usage of filehandles, whilst sysread/syswrite-ing configurably sized chunks from/to disk.
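The hand-coded version, for comparison, is simple enough in outline: sysread a big fixed chunk, peel complete lines off the front, carry the tail into the next read. A sketch (the sub name read_chunked and the 1MB default are placeholders of mine):

```perl
use strict;
use warnings;

# Read $path in $chunk-sized sysreads, invoking $cb once per complete line.
sub read_chunked {
    my ( $path, $cb, $chunk ) = @_;
    $chunk ||= 1024 * 1024;                  # 1MB per disk read
    open my $fh, '<', $path or die "open '$path': $!";
    binmode $fh;
    my $buf = '';
    while (1) {
        my $got = sysread( $fh, $buf, $chunk, length $buf );
        die "sysread: $!" unless defined $got;
        last unless $got;                    # EOF
        while (1) {
            my $nl = index( $buf, "\n" );
            last if $nl < 0;                 # no complete line left; read more
            $cb->( substr( $buf, 0, $nl + 1, '' ) );
        }
    }
    $cb->($buf) if length $buf;              # trailing partial line
    close $fh;
}
```

But it's exactly this boilerplate, re-adapted per project, that a generic module ought to absorb.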
I can write one, but writing a fully-fledged, all-singing/dancing generic module takes a lot of time and thought. I'm surprised it doesn't already exist.
In reply to Re^2: Configurable IO buffersize? by BrowserUk
in thread Configurable IO buffersize? by BrowserUk