in reply to RE: RE: RE (tilly) 2 (blame): File reading efficiency and other surly remarks
in thread File reading efficiency and other surly remarks
I'd be very interested to see your results that show differently.
Cut and paste the code, copy Chatter.bat (33KB) to "file", and run:
```
Benchmark: running BufferedFileHandle, chunk, linebyline, each for at least 3 CPU seconds...
BufferedFileHandle:  4 wallclock secs ( 3.46 usr +  0.00 sys =  3.46 CPU) @ 386.13/s (n=1336)
             chunk:  4 wallclock secs ( 3.63 usr +  0.00 sys =  3.63 CPU) @ 310.19/s (n=1126)
        linebyline:  4 wallclock secs ( 3.40 usr +  0.00 sys =  3.40 CPU) @ 434.71/s (n=1478)
```
This shows that default line-by-line is the fastest (434/s), line-by-line with an enlarged buffer is second (386/s), and chunk-and-split is the slowest (310/s).
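For readers who want to see the shape of the two pure-Perl reading styles being compared, here is a sketch. The sub names, the 64KB block size, and the assumption that "chunk" means reading fixed-size blocks and splitting them on newlines are all mine, not tye's actual benchmark code:

```perl
use strict;
use warnings;

# Default line-by-line: let perl find the newlines via the <> operator.
sub count_lines_linebyline {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my $count = 0;
    $count++ while defined( my $line = <$fh> );
    close $fh;
    return $count;
}

# Chunk-and-split: read fixed-size blocks and split them on newlines,
# carrying any partial trailing line over into the next block.
sub count_lines_chunk {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my ( $count, $tail ) = ( 0, '' );
    while ( read $fh, my $block, 64 * 1024 ) {
        $tail .= $block;
        my @lines = split /(?<=\n)/, $tail;
        $tail = $lines[-1] =~ /\n\z/ ? '' : pop @lines;
        $count += @lines;
    }
    $count++ if length $tail;    # last line had no trailing newline
    close $fh;
    return $count;
}
```

The `split /(?<=\n)/` keeps the newline attached to each piece, which makes the carried-over partial line at the end of a block easy to detect.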
Now append Chatter.bat to "file" until we have a 1MB file. Now we have buffered@15/s, line-by-line@13/s, chunk@9/s.
Find an 85MB file: buffered@0.20/s, line-by-line@0.19/s, chunk@0.12/s.
I'd personally consider Perl broken if it couldn't read a line at a time faster than I could in Perl code. Previous benchmarks have shown that Perl's overriding of stdio buffers can make Perl's I/O faster than I/O in C programs using stdio. So I must be missing something about (at least) your copy of Perl that would explain why standard line-by-line isn't faster.
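For reference, output in the shape quoted above is what Perl's standard Benchmark module prints from `timethese`. A minimal harness might look like the sketch below; the toy subs are placeholders of mine, where the real comparison would read "file" with each strategy:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Toy payloads so this runs quickly; the real comparison would open and
# read the test file inside each sub.
my %tests = (
    linebyline => sub { my $x = 0; $x += $_ for 1 .. 100; $x },
    chunk      => sub { my $x = 0; $x -= $_ for 1 .. 100; $x },
);

# A positive count runs each sub that many times. A negative count such
# as -3 means "run each for at least 3 CPU seconds", which is what
# produces output like "each for at least 3 CPU seconds" above.
my $results = timethese( 1000, \%tests );
```

`timethese` returns a hash reference of Benchmark objects keyed by test name, which `cmpthese` can turn into a comparison table.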
Update: I removed a pointless sentence that was probably snide. I apologize to those who already read it.
- tye (but my friends call me "Tye")
Replies are listed 'Best First'.
- RE (tilly) 6 (bench): File reading efficiency and other surly remarks by tilly (Archbishop) on Aug 26, 2000 at 21:35 UTC
- RE: RE: RE: RE: RE (tilly) 2 (blame): File reading efficiency and other surly remarks by lhoward (Vicar) on Aug 26, 2000 at 21:20 UTC
  - by tye (Sage) on Aug 26, 2000 at 21:30 UTC
  - by lhoward (Vicar) on Aug 26, 2000 at 23:33 UTC