Personally, I would make the program take a byte offset, not a line number. That way you avoid scanning the whole file for line numbers just to compute an "optimal" distribution; instead, seek to the start offset, skip ahead to the first full line after it, and stop once the file position passes the end offset. I'd use a modified runN to do the parallelization - it simply spawns four copies of your program and gives each one its start and stop parameters:
    # line processing program: process the lines of $file whose
    # starting byte offset falls in the range ($start, $stop]
    my ($file, $start, $stop) = @ARGV;
    $start ||= 0;
    $stop  ||= -s $file;
    open my $fh, '<', $file
        or die "Couldn't open '$file': $!";
    seek $fh, $start, 0;
    # now position at the first line *after* $start,
    # if we're not starting at the beginning of the file
    if ($start) {
        <$fh>;
    };
    my $position = tell $fh;
    while (<$fh> and $position <= $stop) {
        $position += length $_;
        ...
    };
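As a sketch of the driver side (this is not runN itself, and `worker.pl` is a hypothetical name for the line-processing program above): compute four byte ranges covering the file and print the invocation for each one. Replace `echo` with the real command, backgrounded with `&` and followed by `wait`, to actually run the four copies in parallel.

```shell
#!/bin/sh
# Hypothetical driver sketch: split a file into four byte ranges
# and emit one worker invocation per range.
file=/etc/hosts                 # any existing file
size=$(wc -c < "$file")         # total size in bytes
chunk=$(( size / 4 ))
for i in 0 1 2 3; do
  start=$(( i * chunk ))
  # the last range absorbs the remainder so the ranges cover the file
  if [ "$i" -eq 3 ]; then stop=$size; else stop=$(( start + chunk )); fi
  echo perl worker.pl "$file" "$start" "$stop"
done
```

Because each worker skips the partial line at its start offset and reads one line past its stop offset, every line is processed by exactly one worker even though the byte ranges cut lines in half.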
You could also call tell on every trip around the main loop, but I have found tell to slow down some of my IO loops, which is why the code tracks the position with length instead. On the other hand, if IO is what's slowing you down, fork and all the CPUs in the world won't make it any faster.
In reply to Re: fork IO
by Corion
in thread fork IO
by baxy77bax