in reply to Fastest I/O possible?

If you are going to be processing megabytes of data or more, then it is probably a good idea to buffer chunks of it into RAM, as that is *much* faster than processing it line by line through IO. If you plan on buffering then I recommend something like this
    { # NOTE: code is untested
        my @chunks = ();
        local $/ = \102400;                        # read fixed 100K records
        while (<$fh>) {
            my $chunk   = $_;
            my $last_rs = rindex($chunk, "\n");    # last newline in this chunk
            if ($last_rs < 0) {                    # no newline: keep the whole chunk
                push @chunks, $chunk;
                next;
            }
            push @chunks, substr($chunk, 0, $last_rs + 1);
            # step back past the partial line so the next read picks it up
            seek($fh, -(length($chunk) - $last_rs - 1), 1);
        }
    }
This should read chunks of up to 100K at a time while also making sure each chunk ends on a newline. As far as I'm aware there are no buffering modules like you describe (at least a quick CPAN search doesn't seem to turn anything up), so perhaps it's time to write one?
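
Once the chunks are in memory you can still work line by line without touching the disk again. A minimal sketch, assuming the @chunks array built above:

    # process the buffered chunks line by line, entirely in memory
    for my $chunk (@chunks) {
        for my $line (split /\n/, $chunk) {
            # do something with $line here ...
        }
    }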
HTH

_________
broquaint

Re: Re: Fastest I/O possible?
by sauoq (Abbot) on Aug 23, 2002 at 01:47 UTC

    I don't think this will help that much. There is buffering going on under the surface with stdio anyway and it should pick a good blocksize based on the device it is reading from.

    The optimum blocksize is returned by stat, as Zaxo pointed out in a recent post. You could use an approach similar to this one (although the seeks are a really bad idea) and choose a multiple of that blocksize if your lines are usually bigger than it is, but then you would incur the overhead of breaking the buffer up into lines yourself. You are probably better off leaving that to the lower level routines.
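
    For illustration, a minimal sketch of reading with the blocksize reported by stat (the blksize field is element 11 of stat's return list; the filename in $file is assumed, and the line-splitting is deliberately left out):

        # read the file in blocks of the filesystem's preferred size
        open my $fh, '<', $file or die "open: $!";
        my $blksize = (stat $fh)[11] || 8192;   # fall back if blksize is unavailable
        my $buf;
        while (sysread($fh, $buf, $blksize)) {
            # process $buf here (splitting it into lines costs extra, as noted above)
        }
        close $fh;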

    -sauoq
    "My two cents aren't worth a dime.";