I am not sure I understand exactly what the challenge is. Each child must return its results as a data chunk independent of any other child's. The run_on_finish() sub receives each child's data and, in my example, collects all children's data into one hash keyed on the child's pid (see the note below about that). Why? Because I assumed you want to keep each child's results separate, since it is possible that child1 returns data with id=12 and so does child2. If that is not necessary, e.g. if each child returns results whose keys never clash with those of any other child, then fine, nothing is set in stone: just merge the children's returned hashes into one larger hash like so:
my %results = ();
$pfm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump,
        $data_structure_reference) = @_;
    my $data = Sereal::Decoder::decode_sereal($$data_structure_reference);
    # surely this is sequential code here so no need to lock %results, right?
    @results{keys %$data} = values %$data;
});
This will create a "flatter" hash without pid information, but there is the risk of key clashes: if %child1 contains key id=12 and %child2 also contains key id=12 (at the top level of their hashes), the new hash %results can of course hold only one value for that key, and it will be whatever the last child to finish returned.
A nested hash is probably more efficient than a flat hash of 1 million items, at least as far as possible key collisions are concerned. Other Monks can correct me on that. In general, I would assume that a hash with 1 million items is child's play for Perl.
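For comparison, the nested collection I described above could look roughly like this. This is a minimal sketch along the lines of my earlier example, not the exact code; %results_by_child is just a name I made up here, and it is keyed on pid purely for illustration (see the note on pids right below):

use Sereal::Decoder;

my %results_by_child;
$pfm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump,
        $data_structure_reference) = @_;
    my $data = Sereal::Decoder::decode_sereal($$data_structure_reference);
    # one sub-hash per child, so a key like id=12 in two different
    # children can not clobber anything
    $results_by_child{$pid} = $data;
});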
Note on using PIDs as hash keys: Using a child's pid as a hash key to collect that child's results is not a good idea, because pid numbers can be recycled by the OS and two children started at different times may end up with the same pid. A better idea is to assign each child its own unique id, drawn from a pool of unique ids and handed to the child at fork just like its input data, as sketched below.
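A minimal sketch of that idea, assuming the stock Parallel::ForkManager interface where whatever you pass to start() comes back as the $ident argument of the run_on_finish callback. The input data, the %input_per_child name and the Sereal round-trip here are placeholders for illustration, not your actual code:

use strict;
use warnings;
use Parallel::ForkManager;
use Sereal::Encoder;   # child side
use Sereal::Decoder;   # parent side

# hypothetical per-child input, keyed by a unique id we hand out ourselves
my %input_per_child = (
    'child-001' => [ 1 .. 10 ],
    'child-002' => [ 11 .. 20 ],
);

my $pfm = Parallel::ForkManager->new(4);

my %results;
$pfm->run_on_finish( sub {
    my ($pid, $exit_code, $ident, $exit_signal, $core_dump,
        $data_structure_reference) = @_;
    my $data = Sereal::Decoder::decode_sereal($$data_structure_reference);
    # key on the unique id we assigned at fork, not on the (recyclable) pid
    $results{$ident} = $data;
});

for my $id (keys %input_per_child) {
    $pfm->start($id) and next;   # $id comes back as $ident in run_on_finish

    # ... child does its work on $input_per_child{$id} ...
    my %child_results = (
        id    => $id,
        count => scalar @{ $input_per_child{$id} },
    );

    my $frozen = Sereal::Encoder::encode_sereal(\%child_results);
    $pfm->finish(0, \$frozen);   # hand the serialized results back to the parent
}
$pfm->wait_all_children;

This way the keys of %results stay stable no matter how the OS hands out pids.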
Let me know if I got something wrong or if you have more questions.
bw, bliako