sweetblood has asked for the wisdom of the Perl Monks concerning the following question:

I need to have a script create several instances of an app and then wait until all the instances have completed. Each instance is run with its own set of data, so they may not all finish at the same time. I need to wait until all of my instances have finished before continuing my process, so all the data are complete before acting on them.

My original thought was to use open2 to open each instance of the program, redirecting its stderr and stdout to a file (2>&1) and ignoring the filehandles open2 creates. Each instance will run in its own directory, and the app creates its files in the current directory. My thought was that as long as I don't use waitpid() I can launch all of them without hanging. Then I would watch /proc/$pid/cwd, and as long as it matches my cwd I know the process is still running. As soon as

    /proc/$pid/cwd ne $curdir

I know my child is no longer running. When all of my children are complete I can then move on to collecting my data and finishing the process.

I coded up a test and it seems to work as I expect. However, I did not use my actual app to test, as it runs for many hours; the app I launched was a very trivial Perl script. As I said, this worked fine, but before I code this into my actual process and use it to launch many instances of this non-trivial app that takes hours to run, I thought I'd read IPC::Open2 and make sure I wouldn't have any trouble. I became nervous, though, and thought perhaps there is a better way. The process that I'm adding this to currently takes all the data and passes it to a single instance of the app, and therefore does not take advantage of the system's 4 CPUs. I thought perhaps of forking, but I've never done that before and was unsure of how I would make it work.
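The /proc-watching idea above can be sketched as follows. This is Linux-specific (it depends on /proc), and `wait_for_pids` is just an illustrative helper name, not code from the post:

```perl
#!/usr/bin/perl
# Sketch of the /proc-watching approach described above (Linux-only).
# wait_for_pids() is an illustrative name, not from the original post.
use strict;
use warnings;

sub wait_for_pids {
    my @pids = @_;
    while (@pids) {
        # /proc/$pid/cwd stops resolving once the process is gone,
        # so a failed readlink means that child is finished
        @pids = grep { defined readlink("/proc/$_/cwd") } @pids;
        sleep 1 if @pids;
    }
}
```

Note that an exited-but-unreaped child (a zombie) keeps its /proc entry around, so in a real script you would still want to reap with wait() or set `$SIG{CHLD} = 'IGNORE'`.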

Any suggestions would be greatly appreciated.

Sweetblood

Re: creating multiple instances of a process
by Zaxo (Archbishop) on Jun 16, 2005 at 18:48 UTC

    The wait function will take care of your needs. Here's how I like to organize it:

    my %kid;
    for (0 .. $num) {
        defined(my $cpid = fork) or warn $! and next;
        $kid{$cpid} = undef, next if $cpid;
        # do kid stuff
        exit 0;
    }
    delete $kid{wait()} while %kid;

    After Compline,
    Zaxo

      Sorry for the delay in responding, but I've been banging my head against this for a while. I've tried what I believe is what you were telling me, but I'm a little dense. The system I'm on is Red Hat, if that helps. Here is my amended code after your suggestions; let me know what you think.
      #!/usr/bin/perl -w
      use strict;

      chomp (my $curdir = `pwd`);
      my $proc = '../delme.pl > proc.log 2>error.log';
      my %kid;

      for (1 .. 4) {
          chdir $curdir;
          do { mkdir "test$_" or die $! } unless (-d "test$_");
          chdir "test$_" or die $!;
          print "Launching: Process $_$/";
          defined(my $cpid = fork) or warn $! and next;
          $kid{$cpid} = undef, next if %kid;
          exec $proc;
          exit 0;
      }
      delete $kid{wait()} while %kid;
      print "Done...$/";
      I get no errors, but only one instance runs. I need four independent instances running, and as you can see from the code, they should each run in their own directory. The delme.pl is trivial, essentially just printing a line to STDOUT as well as STDERR, then sleeping for 25 seconds.

      Thanks for your help!

      Sweetblood

        You have a typo in line 14 which is making both the parent and the child fall through to the exec on the first time through the loop, so the parent never launches the remaining instances. Change `... if %kid;` to `... if $cpid;`.

        After Compline,
        Zaxo
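With that one-character fix applied, the loop behaves as intended. Here is a minimal, self-contained version for reference, substituting an inline sleep for `../delme.pl` (and dropping the per-directory setup) so the sketch runs anywhere:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Corrected fork loop from the thread; the child's sleep stands in
# for the long-running app (../delme.pl in the original).
my %kid;
for my $n (1 .. 4) {
    print "Launching: Process $n$/";
    defined(my $cpid = fork) or warn $! and next;
    $kid{$cpid} = undef, next if $cpid;   # parent: record child, keep looping
    # child: do the real work here
    sleep 1;
    exit 0;
}
delete $kid{wait()} while %kid;           # reap every child before moving on
print "Done...$/";
```

The key point is that the `... if $cpid;` test makes only the parent take the `next`, so only the children ever reach the work (or the exec, in the real script), and the parent survives to launch all four and reap them.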