which is comparable, on my machine, to chunky.

sub IRS_chunky {
    my($buf, $leftover, @lines);
    local $/ = \10240;            # read in fixed 10240-byte records
    open(FILE, "totalfoo");
    while ( $buf = <FILE> ) {
        $buf = $leftover . $buf;  # prepend any partial line left from the previous chunk
        @lines = split(/\n/, $buf);
        # if the chunk ended mid-line, save the partial line for the next pass
        $leftover = ($buf !~ /\n$/) ? pop @lines : "";
        foreach (@lines) {
            # per-line work would go here; empty for the benchmark
        }
    }
    close(FILE);
}
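For reference, the plain line-by-line reader this gets compared against would look something like the following. This is only a sketch of the obvious baseline, not the actual code from the earlier post, and it reuses the "totalfoo" test file name from above.

sub linebyline {
    open(FILE, "totalfoo");
    while ( my $line = <FILE> ) {
        # per-line work would go here; empty for the benchmark
    }
    close(FILE);
}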
My real question, though, is what kind of machines are you running this on? My benchmark results are completely different from yours. Having run this three times on a Sun Ultra 5, on a file of roughly 500,000 lines and 24MB read from my local IDE drive, my results were pretty consistently like this:

perl test_read.pl
Benchmark: timing 10 iterations of Chunky IRS, chunk, linebyline...
Chunky IRS: 41 wallclock secs (31.31 usr + 3.71 sys = 35.02 CPU)
     chunk: 40 wallclock secs (31.00 usr + 3.81 sys = 34.81 CPU)
linebyline: 27 wallclock secs (17.67 usr + 2.47 sys = 20.14 CPU)

The code I used was identical to the earlier post, with the subroutine I wrote added. Using perl 5.6 generated the same basic results, plus or minus 1 for each stat. I am now somewhat confused. Is this a difference in the way Solaris uses its buffers? What platform/OS were the original tests run on?
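Timings in that format come from the standard Benchmark module's timethese(). A harness along these lines would drive the three readers; the sub names on the right are assumptions standing in for whatever the earlier post actually called them:

use Benchmark;

# run each reader 10 times and print the report quoted above
timethese(10, {
    'Chunky IRS' => \&IRS_chunky,
    'chunk'      => \&chunky,
    'linebyline' => \&linebyline,
});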
In reply to RE: Re: read X number of lines? by mikfire, in thread read X number of lines? by eduardo