in reply to Re: IO::Select - is it right for this?
in thread IO::Select - is it right for this?

for ($i = 0; $i < $num_children; $i++) {    # $num_children = "# of children to do"
    $pid[$i] = open($created_file_handle, '-|', $custom_call_statement);
    if ($pid[$i]) {
        waitpid($pid[$i], 0);    # parent waits for child to finish
    }
    else {
        while (<$created_file_handle>) {
            $out[$i] .= $_;      # add output to the array
        }
        exit;
    }
}
Problem is, and it's the same when forking, that the parent can't see @out. Should I move the $out[$i] .= $_; to the parent process? The forked processes work fine, and fairly quickly, but I need to get the child to tell the parent what it found.

I'd rather not use the temp-file solution: it will get very messy once multiple users run multiple processes at the same time, and the extra disk traffic will slow the whole thing down. I'd rather find a solution that passes data back to the main process faster.
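For reference, here is a minimal sketch (not the poster's actual code; the child count and payload are invented) of getting each child's output back into @out without temp files, using open's '-|' mode to give each child its own pipe. All children are started before any pipe is drained, so they run concurrently:

```perl
use strict;
use warnings;

my $n = 3;       # invented child count, stands in for the real number
my (@out, @fh);

# Start all children first; each child's STDOUT feeds its own pipe.
for my $i (0 .. $n - 1) {
    my $pid = open($fh[$i], '-|');    # fork: child writes, parent reads
    die "fork failed: $!" unless defined $pid;
    unless ($pid) {                   # child: print results and exit
        print "result from child $i\n";
        exit 0;
    }
}

# Parent: only now drain each pipe in turn.
for my $i (0 .. $n - 1) {
    local $/;                         # slurp mode
    $out[$i] = readline($fh[$i]);
    close $fh[$i];                    # close also reaps this child
}

print @out;
```

The key point is that the parent never waits on a child before reading its pipe; closing a '-|' handle reaps the child implicitly.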

Replies are listed 'Best First'.
Re: Re: Re: IO::Select - is it right for this?
by Skeeve (Parson) on Oct 21, 2003 at 11:20 UTC
    To be honest... I really don't understand your code above. Sorry...
    But then... Here is a complete program that might help you. It forks 2 children and collects the children's output in an array.
    Be aware that there is no error handling (can't fork etc.) yet.
    use warnings;
    use strict;

    my $children = 2;    # How many to fork
    my @out;
    my $first_child;
    $| = 1;              # unbuffered output to see the effect

    my $pid = open($first_child, '-|');
    if ($pid) {
        # parent: collects all output
        @out = <$first_child>;
    }
    else {
        # child forks again and produces output
        my (@child, @childpid);
        for (my $i = 0; $i < $children; ++$i) {
            $childpid[$i] = open($child[$i], '|-');
            if ($childpid[$i]) {
                # parent: does nothing yet
            }
            else {
                # child produces some output
                my $j = 10;
                while ($j--) {
                    print "I'm child $i and will tell this $j more time(s)\n";
                    sleep 1;
                }
                exit;
            }
        }
        foreach (@child) {    # closing all child handles
            close $_;
        }
        exit;
    }
    print @out;
      Actually, I've found a solution that works.
      I'll post it here later in the day, when I have some time to copy it from the other machine (no, they are not connected in any way).

      But thank you for the look.
      And for my next trick, I'm going to turn it into a duck, or something that uses shared memory, whichever makes me laugh more.
        As promised, here is the code that works (well, sort of works anyway):
        for ($i = 0; $i < $g; $i++) {
            pipe($rh, $wh);                # both of these are generated in the loop
            if ($pid[$i] = fork()) {
                # parent process
                waitpid($pid[$i], 0);      # wait for child
                close($wh);
                while (<$rh>) {            # gimme the output
                    if ($_ =~ m/^Error/) {
                        push(@error, $_);
                    }
                    elsif ($_ =~ m/^Hits/) {
                        push(@hits, $_);
                    }
                    else {
                        push(@data, $_);
                    }                      # each is a result from an engine
                }
            }
            else {
                close($rh);
                open(STDOUT, '>&', $wh);
                open(OUT, "| $caller");    # call the handler with arguments
                close(OUT);
                exit(0);
            }
        }


        This is part of a multi-headed search engine. The process is to go out to several search engines, poll them for results, and pass them back to the main program. The program is customized to allow from 10-50 results from each search engine. If I say get 10 or 20 from each engine, it works fine. But if I go for 30 or more, it chokes. Is it possible that I'm trying to pull too much through the pipe?
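        Quite possibly, yes, and the likely culprit is ordering: the loop calls waitpid before anything is read from $rh, and the parent still holds $wh open while it waits. Once a child has written roughly a pipe buffer's worth of data (often around 64 KB), its writes block, while the parent is blocked in waitpid: a deadlock that only surfaces with larger result sets, which matches "30 or more chokes". A minimal sketch of the drain-then-wait order (the child's output here is invented, just enough to overflow the buffer):

```perl
use strict;
use warnings;

# Hypothetical illustration: the child prints 100,000 lines, standing in
# for a search engine returning a large result set.
my @lines;

pipe(my $rh, my $wh) or die "pipe: $!";
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid) {                  # parent
    close $wh;               # close our copy of the write end first,
    @lines = <$rh>;          # then read to EOF *before* waiting, so the
    close $rh;               # child never blocks on a full pipe buffer
    waitpid($pid, 0);
    print scalar(@lines), " lines collected\n";   # prints: 100000 lines collected
}
else {                       # child
    close $rh;
    print {$wh} "line $_\n" for 1 .. 100_000;     # far more than the ~64 KB buffer
    close $wh;
    exit 0;
}
```

Reading to EOF before waitpid means the pipe can never fill up, no matter how many results come back.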

        Yes, I know there is stuff missing, but nothing that is relevant to the question at hand. All of that I can't share here.

        I should also point out that the idea is to get all the searches to run at the same time so hopefully the entire process will speed up.
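To circle back to the thread title: IO::Select is a reasonable fit here once each engine has its own pipe, because the parent can drain whichever pipe has data ready instead of waiting on the children one at a time, so all searches really do run concurrently. A sketch, with the engine count and output invented for illustration:

```perl
use strict;
use warnings;
use IO::Select;

my $n = 3;    # invented engine count
my (%buf, @pids);
my $sel = IO::Select->new;

# Start every child first; the parent keeps only the read ends.
for my $i (0 .. $n - 1) {
    pipe(my $rh, my $wh) or die "pipe: $!";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid) {                       # parent
        close $wh;
        $sel->add($rh);
        $buf{ fileno $rh } = { id => $i, data => '' };
        push @pids, $pid;
    }
    else {                            # child: invented "engine" output
        close $rh;
        print {$wh} "engine $i: hit $_\n" for 1 .. 5;
        close $wh;
        exit 0;
    }
}

# Drain whichever pipe is ready until every child has hit EOF.
while ($sel->count) {
    for my $rh ($sel->can_read) {
        my $got = sysread($rh, my $chunk, 4096);
        die "sysread: $!" unless defined $got;
        if ($got) {
            $buf{ fileno $rh }{data} .= $chunk;
        }
        else {                        # EOF: this child is done writing
            $sel->remove($rh);
            close $rh;
        }
    }
}
waitpid($_, 0) for @pids;             # reap only after all pipes are drained

print $_->{data} for sort { $a->{id} <=> $b->{id} } values %buf;
```

Because no pipe is ever left unread while the parent blocks elsewhere, a slow or verbose engine cannot stall the others, and the buffer-size limit from the earlier version disappears.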