In this context, a record is one line, i.e. $/ = "\n". When the 3rd argument to read carries a 'k' or 'm' suffix (e.g. '4k'), read slurps that many bytes and then continues through to the end of the current line, so a chunk never stops mid-record. This read behavior applies to MCE::Shared::Handle only. Without the 'k' or 'm' suffix, read behaves exactly like the native read.
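As a rough illustration of that semantic in core Perl only (the sub name read_chunk is mine, not MCE's): read the requested bytes, then finish the partial record with readline.

```perl
use strict;
use warnings;

# Approximate the 'k'/'m' suffix semantics with core Perl: read the
# requested number of bytes, then continue to the end of the current
# line so the chunk never ends mid-record.
sub read_chunk {
    my ($fh, $size) = @_;
    my $bytes = read($fh, my $buf, $size);
    return undef unless defined $bytes && $bytes > 0;
    # complete the last (possibly partial) record
    if (substr($buf, -1) ne "\n" and defined(my $rest = readline($fh))) {
        $buf .= $rest;
    }
    return $buf;
}

# demo with an in-memory handle
my $data = "alpha\nbravo\ncharlie\n";
open my $fh, '<', \$data or die $!;
my $chunk = read_chunk($fh, 8);   # 8 bytes lands mid-"bravo"; extended to EOL
print $chunk;                     # prints "alpha\nbravo\n"
```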
Yes, I had thought about adding readlines at the time, but decided against it after writing the following.
my @lines = tied(*{$fh})->readlines(10);
In the end, I settled on having the file-handle specifics feel like native Perl, and it does. The 'k' or 'm' suffix (extra behavior) provides chunk IO; likewise, $. gives you the chunk_id. One can get an estimate with "cat csv_file | head -500 | wc": take the character count, divide by 1024, and append the 'k' suffix for use with read. IMHO, there's no reason for workers to receive the same number of lines; some will get a little less, some a little more.
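The estimate above works out like this (the wc character count is a hypothetical number of my own, for illustration):

```perl
use strict;
use warnings;

# Suppose "head -500 csv_file | wc" reports 500 lines and 31250
# characters (hypothetical numbers). Dividing the character count by
# 1024 and appending 'k' gives a read size that corresponds to roughly
# 500 lines per chunk.
my $chars_in_500_lines = 31_250;                  # from wc, hypothetical
my $kb = int($chars_in_500_lines / 1024) || 1;    # round down, minimum 1
my $chunk_size = $kb . 'k';
print "$chunk_size\n";                            # prints "30k"
```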
A possibility that comes to mind is having MCE::Shared export "mce_read" to provide full MCE-like chunk IO capabilities. A value greater than 8192 would mean to read that number of bytes, continuing through to the end of the line. If so, the following will only work with handles constructed via mce_open.
# same as chunk_size => 1 in MCE
$n_lines = mce_read $fh, \@lines, 1;

# read max 500 lines
$n_lines = mce_read $fh, \@lines, 500;

# read 1m, including till the end of line
$n_lines = mce_read $fh, \@lines, '1m';

# read 16k, ditto regarding till the end of line
$n_lines = mce_read $fh, \@lines, '16k';

# same thing as above, but slurp into $buf
$n_chars = mce_read $fh, $buf, 500;
$n_chars = mce_read $fh, $buf, '1m';
$n_chars = mce_read $fh, $buf, '16k';

# $. gives chunk_id
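Note that mce_read is a proposal here, not an existing MCE::Shared API. A core-Perl approximation of the line-count case only (sub name mine; the 'k'/'m' suffix and chunk_id bookkeeping are omitted) might look like:

```perl
use strict;
use warnings;

# Sketch of the proposed mce_read, line-count form only: read at most
# $max lines into @$aref and return the number of lines actually read.
sub my_mce_read {
    my ($fh, $aref, $max) = @_;
    @$aref = ();
    while (@$aref < $max and defined(my $line = readline($fh))) {
        push @$aref, $line;
    }
    return scalar @$aref;
}

my $data = join '', map { "line$_\n" } 1 .. 7;
open my $fh, '<', \$data or die $!;
my $n = my_mce_read($fh, \my @lines, 5);
print "$n\n";   # prints "5"
```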
Regards, Mario.
In reply to Re^3: PerlIO file handle dup by marioroy
in thread PerlIO file handle dup by chris212