in reply to Re: Issue with communication of large data between Parent and child using Perl pipes
in thread Issue with communication of large data between Parent and child using Perl pipes

Thanks Ikegami. Your explanation that a pipe can hold only a limited number of bytes, and of how the deadlock occurs, made things clear. I have now moved the parent's read loop so that it drains the pipe BEFORE the parent checks whether the child process has terminated. After the child is killed, the parent reads one last time, just in case the child wrote something to the file handle PARENT_READ_HANDLE right before being killed.
After making these changes, the code runs without any issues.
else    # parent process
{
    # This is the section I added before checking for the
    # child's status, and it solved the problem
    while (<PARENT_READ_HANDLE>) {
        printf ("Parent received from the child : $_\n");
    }

    # This section checks and makes sure the child process is terminated
    # if it has been hung for more than 30 seconds.
    $count = 0;
    $childprocess = qx(ps -ef | grep -v defunct | grep -v grep | grep $pid);
    while (($childprocess ne "") && ($count < 6)) {
        $count += 1;
        printf ("Found child process still running count=$count\n");
        sleep (5);
        $childprocess = qx(ps -ef | grep -v defunct | grep -v grep | grep $pid);
    }
    if ($childprocess ne "") {
        qx(kill -9 $pid);
        printf (STDERR "Child ssh process hung. So forcibly killed it\n");
    }
    waitpid($pid, 0);

    # read the data printed by the child one last time
    while (<PARENT_READ_HANDLE>) {
        printf ("Parent received from the child : $_\n");
    }
}
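For context, here is a minimal sketch of the pipe()/fork() arrangement that the else branch above assumes; the setup is not shown in this thread, so the handle names and the child's output here are only illustrative stand-ins:

    use strict;
    use warnings;

    # Sketch only: PARENT_READ_HANDLE / CHILD_WRITE_HANDLE and the
    # child's output are placeholders for the real setup.
    pipe(PARENT_READ_HANDLE, CHILD_WRITE_HANDLE) or die "pipe failed: $!";

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # child: close the unused read end, write to the parent, then exit
        close PARENT_READ_HANDLE;
        print CHILD_WRITE_HANDLE "line of output from the child\n";
        close CHILD_WRITE_HANDLE;
        exit 0;
    }
    else {
        # parent: close the unused write end so the read loop sees EOF
        # once the child finishes, then drain the pipe before waitpid()
        close CHILD_WRITE_HANDLE;
        while (<PARENT_READ_HANDLE>) {
            print "Parent received from the child : $_";
        }
        waitpid($pid, 0);
    }

Closing the write end in the parent matters: otherwise the parent's read loop never sees end-of-file, even after the child exits.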

As a note, there is documentation on opening pipes using "-|" in the "Safe Pipe Opens" section at the URL below:
http://www.perl.com/doc/manual/html/pod/perlipc.html
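
For reference, a minimal sketch of that "-|" form (the ssh command shown here is only illustrative, not taken from the original script): open() forks for you, connects the child's STDOUT to the handle in the parent, and returns the child's PID:

    use strict;
    use warnings;

    # "-|" open: the child's STDOUT is connected to $child_out in the
    # parent; the command list is a placeholder.
    my $child_pid = open(my $child_out, '-|', 'ssh', 'somehost', 'some_command')
        or die "cannot fork: $!";

    while (<$child_out>) {
        print "Parent received from the child : $_";
    }

    # close() waits for the child; $? then holds its exit status
    close $child_out or warn "child exited with status $?";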

thanks waavman