in reply to Re^3: PerlIO file handle dup
in thread PerlIO file handle dup
It works great, and chunked reading with MCE::Shared on uncompressed files is actually faster than a dup'ed file handle plus seeking to a position stored in a shared scalar. That holds even with the Text::CSV_XS module, where I now have to call it once for every record instead of making a single call that reads 500 lines from a file handle (which I think would just use getline 500 times internally anyway). On output I don't see any improvement over dup'ed file handles with autoflush: the semaphores that keep the output in order already prevent concurrent writes, and they would still be needed with MCE::Shared.
Specifying the chunk size in bytes does make more sense for memory management, since record sizes can vary greatly from one file to the next. I still think a name like mce_read would be more intuitive, since you don't expect the usage of a core function like read to change like that, but I understand it now. Thanks!
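In case it helps anyone who finds this thread later, here is roughly the shape of the reader side. This is a stripped-down sketch from memory rather than my actual script: the path, worker count, chunk size, and CSV options are placeholders, and I may not have the chunk-mode read syntax exactly right.

    use strict;
    use warnings;

    use MCE::Hobo;
    use MCE::Shared;
    use Text::CSV_XS;

    # Shared handle; reads are serialized through the shared-manager process.
    mce_open my $fh, '<', 'input.csv' or die "open: $!";   # placeholder path

    sub reader {
        my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

        while (1) {
            # A length with a 'k' or 'm' suffix puts read into chunk mode:
            # it grabs roughly that many bytes, then continues to the end
            # of the last record so lines are never split between workers.
            my $n = read $fh, my($chunk), '1m';
            last unless $n;

            # One parse() call per record instead of getline() per line
            # on the shared handle.
            for my $record (split $/, $chunk) {
                next unless $csv->parse($record);
                my @fields = $csv->fields;
                # ... process @fields ...
            }
        }
    }

    my @workers = map { MCE::Hobo->create(\&reader) } 1 .. 4;
    $_->join for @workers;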
A couple of other suggestions: an mce_read that returns an array (or array reference) of records, since it is already great at reading a chunk of records, would save a split($/, $chunk). Also, maybe a write that takes a chunk ID argument and keeps the output chunks in the same sequence. I'm not sure whether it should block until the previous chunks are written (my script currently does), or buffer in memory and return before the data is eventually written, which could eat up memory if you read and process faster than you can write.
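To make that second suggestion concrete, here is a rough sketch of the buffered variant built on MCE's existing gather option, which runs a callback in the manager process. It uses MCE::Loop rather than the shared handle above, purely for illustration; ordered_writer is a made-up helper, and the paths, worker count, and per-chunk work are placeholders.

    use strict;
    use warnings;

    use MCE::Loop;

    open my $out_fh, '>', 'output.txt' or die "open: $!";  # placeholder path

    # Made-up helper: buffers chunks that arrive out of order and flushes
    # them to the handle as soon as the next chunk ID shows up. Runs in
    # the manager process, so no locking is needed.
    sub ordered_writer {
        my ($fh) = @_;
        my %pending;
        my $next_id = 1;
        return sub {
            my ($chunk_id, $output) = @_;
            $pending{$chunk_id} = $output;
            while (exists $pending{$next_id}) {
                print {$fh} delete $pending{$next_id};
                $next_id++;
            }
        };
    }

    MCE::Loop->init(
        max_workers => 4,
        chunk_size  => 1048576,   # above 8192 this is bytes, not records
        use_slurpio => 1,         # pass each chunk as a scalar reference
        gather      => ordered_writer($out_fh),
    );

    mce_loop_f {
        my ($mce, $slurp_ref, $chunk_id) = @_;
        my $output = uc ${$slurp_ref};     # stand-in for the real work
        MCE->gather($chunk_id, $output);   # worker returns right away
    } 'input.csv';                         # placeholder path

    close $out_fh;

The obvious downside is the one I mentioned: %pending can grow without bound if the workers outrun the disk.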