The idea would probably work for forked worker processes as well, but you would need to pass the $pid of the parent process along with the fileno, since filehandles belonging to the same owner are writable by all of that owner's processes.
The IO::Select loop in the main thread would be set up much like a socket-watching program. As data arrives on $select->can_read, it is read (preferably with sysread in large chunks) and simply copied to an output filehandle.
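A rough, untested sketch of that loop follows. It assumes each worker thread was handed the write end of a socketpair, that the read ends have been collected in @readers, and that $out is the destination filehandle; the name drain_workers and the 64K chunk size are just illustrative choices.

    use strict;
    use warnings;
    use IO::Select;

    sub drain_workers {
        my ( $out, @readers ) = @_;
        my $select = IO::Select->new(@readers);

        while ( $select->count ) {
            # Block until at least one worker handle has data ready
            for my $fh ( $select->can_read ) {
                # Pull data in large chunks rather than line by line
                my $n = sysread( $fh, my $buf, 65536 );
                if ( !defined $n ) {
                    warn "sysread failed: $!";
                    next;
                }
                if ( $n == 0 ) {
                    # Worker closed its end; stop watching this handle
                    $select->remove($fh);
                    close $fh;
                    next;
                }
                # Copy the chunk straight to the output filehandle
                print {$out} $buf;
            }
        }
    }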
A few points the OP would have to watch are:
1. Making sure the split of the original huge file doesn't fall in the middle of a line, which would leave a few records broken (see the sketch after this list).
2. Making sure that IO::Select doesn't clog up and slow down the output of some threads because one overly aggressive thread outputs too much and hogs the Select object. One possible solution is to use the largest filehandle buffers the platform allows, so slower threads can keep writing to their buffers even if one thread's output becomes very heavy.
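For point 1, one common approach (shown here only as a sketch, not as the OP's code) is to compute rough byte offsets and then nudge each one forward to the next newline before handing the ranges to the worker threads. The function name line_aligned_offsets and the arguments $path and $nchunks are made up for illustration.

    use strict;
    use warnings;

    sub line_aligned_offsets {
        my ( $path, $nchunks ) = @_;
        my $size = -s $path;
        open my $fh, '<', $path or die "open $path: $!";

        my @offsets = (0);
        for my $i ( 1 .. $nchunks - 1 ) {
            my $guess = int( $size * $i / $nchunks );
            seek $fh, $guess, 0 or die "seek: $!";
            <$fh>;                    # discard the partial line we landed in
            push @offsets, tell $fh;  # next chunk starts on a fresh line
        }
        close $fh;
        return @offsets;   # worker $i reads from $offsets[$i] up to $offsets[$i+1]
    }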
The code should be fairly straightforward, and someone as agile with thread code as you could probably whip it out quickly. For me, it would take all morning, and I prefer f'ing off. :-)
In reply to Re^5: how to split huge file reading into multiple threads
by zentara
in thread how to split huge file reading into multiple threads
by sagarika