awohld has asked for the wisdom of the Perl Monks concerning the following question:

I have a script that runs two external scripts and one binary file. Each script runs one at a time and before the next one starts, the previous one needs to finish.

I wrote this but it all runs at the same time.

How do I get the succeeding programs to wait for the previous program to finish running before executing?
#!/usr/bin/perl -w
use strict;

my $pid;
my $getSite   = "/www/cgi-bin/getSite.pl";
my $createCSV = "/www/cgi-bin/createCSV.pl";
my $createXML = "/www/cgi-bin/csv2xml";
my $siteCSV   = "/www/cgi-bin/site.CSV";
my $siteXML   = "/www/cgi-bin/site.XML";

unless ($pid = fork) {
    unless (fork) {
        exec "$getSite";
        die "Couldn't run getSite.pl";
        exit 0;
    }
    exit 0;
}
waitpid($pid, 0);

unless ($pid = fork) {
    unless (fork) {
        exec "$createCSV";
        die "Couldn't run createCSV.pl";
        exit 0;
    }
    exit 0;
}
waitpid($pid, 0);

unless ($pid = fork) {
    unless (fork) {
        exec "$createXML $siteCSV $siteXML";
        die "Couldn't run create XML File";
        exit 0;
    }
    exit 0;
}
waitpid($pid, 0);

Replies are listed 'Best First'.
Re: Waiting for an External Program to Finish Executing
by Aristotle (Chancellor) on Nov 11, 2005 at 05:54 UTC

    Uh, how about system?

    system $getSite;
    system $createCSV;
    system $createXML, $siteCSV, $siteXML;
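
    system blocks until the child exits and returns its wait status (0 on success), so you can also check each step before moving on. A minimal runnable sketch of that idea — the perl one-liners here are stand-ins for the real scripts, which I obviously don't have:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-ins for the external programs; $^X is the perl binary running this script.
my @getSite   = ($^X, '-e', 'exit 0');
my @createCSV = ($^X, '-e', 'exit 0');

# system waits for each child to finish before returning,
# so the steps run strictly one after another.
system(@getSite)   == 0 or die "getSite step failed: $?";
system(@createCSV) == 0 or die "createCSV step failed: $?";
print "all steps finished\n";
```

    The list form of system also bypasses the shell entirely, so the arguments reach the program untouched.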

    Also, please don’t quote lone variables. "$getSite" is subtly different from $getSite: it forces stringification of the value, which is almost never what you need, may bite you, and just adds noise. Perl is not shell, so don’t do that.

    That said – why put this into a Perl script instead of a shell script?

    Makeshifts last the longest.

Re: Waiting for an External Program to Finish Executing
by Moron (Curate) on Nov 11, 2005 at 10:38 UTC
    The general definition of a fork is that a process spawns a child that begins execution from the same program location as its parent, while the parent then continues in parallel.

    Your code calls the perl fork function six times, three of which are controlled by capturing the returned pid and issuing a waitpid to hold up parent execution until the child completes. Not only will those children run on through the rest of the code unchecked, but the three uncontrolled forks will also triplicate everything from the point where they occur. I don't dare test (or rather inflict) your code on my machines under these circumstances, but if you were to place a sleep 60 at the end of the code and then repeatedly examine what is running with the ps command, I wager you would see far more parallel processes being spawned than you imagine should take place, even after reading the above definition.

    To fix this code in the simplest way:

    1) yes, as already replied, replace all exec with system BUT

    2) also remove all "unless fork" lines reducing the code to a single non-parallel process.
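
    Under those two changes, the whole thing reduces to something like the following sketch — the paths are copied from the OP, so this is untested here (I don't have those scripts):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $getSite   = "/www/cgi-bin/getSite.pl";
my $createCSV = "/www/cgi-bin/createCSV.pl";
my $createXML = "/www/cgi-bin/csv2xml";
my $siteCSV   = "/www/cgi-bin/site.CSV";
my $siteXML   = "/www/cgi-bin/site.XML";

# Each system call blocks until its child exits, so the three
# steps run strictly in sequence with no extra forking at all.
system($getSite)                       == 0 or die "Couldn't run getSite.pl: $?";
system($createCSV)                     == 0 or die "Couldn't run createCSV.pl: $?";
system($createXML, $siteCSV, $siteXML) == 0 or die "Couldn't create XML file: $?";
```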

    -M

    Free your mind

      Are you sure? It looks to me like all of the children either exec or die, which will stop them from executing any more code than they're intended to.
        If you look at the manual, you'll see that fork either returns the pid (and spawns a child) or undef if the fork was unsuccessful, causing the code after the unless to finally execute — i.e. whenever the system is being thrashed badly enough for fork to return undef. Note also the conditions under which the manual states that zombies will accumulate: the OP's code meets the criteria for a zombie factory of von Neumann proportions. In fact the intended functionality only gets executed when there is a spawn error, and sufficient spawn errors are generated to trip the "unless" branches precisely because this code is thrashing the machine; since those error circumstances occur simultaneously, the branches with the execs end up running approximately in parallel. Furthermore, the OP's actual symptoms are explained by this analysis but conflict with yours.
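
        For reference, perldoc -f fork documents that fork returns the child's pid to the parent, 0 to the child, and undef on failure, which is what the usual blocking run-one-child pattern is built on. A runnable sketch, with a perl one-liner standing in for the external program:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Fork one child, exec a command in it, and block until it exits.
sub run_and_wait {
    my @cmd = @_;
    defined(my $pid = fork) or die "fork failed: $!";
    unless ($pid) {            # child: fork returned 0
        exec @cmd;
        die "exec @cmd failed: $!";   # only reached if exec itself fails
    }
    waitpid($pid, 0);          # parent: block until this child exits
    return $? >> 8;            # child's exit status
}

# $^X is the perl binary running this script.
my $status = run_and_wait($^X, '-e', 'exit 7');
print "child exited with status $status\n";
```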

        -M

        Free your mind