renzosilv has asked for the wisdom of the Perl Monks concerning the following question:

Hello:

I am trying to write a script which starts a program and then runs a few more commands to modify that running program. My problem is that once I start the program, control only returns to my script after the program is terminated by the user. I know I could use the exec() call to start the program and then proceed to run the rest of my commands. However, I would like to run just two more commands and then wait for the program to be terminated by the user. So being able to run a system() call with multiple commands would be ideal for me. Is there any way I can do this?

I tried doing something like this, which didn't work.
system ("vp -e " . $ENV{'VREL'} ; "vscmd set-measure-mode fast"; "vscmd pg-address-check off");
I know the following code will do what I need. However, I need a way to know when my vp program gets terminated by the user so I can run a few more commands.
exec ("vp -e " . $ENV{'VREL'} ); exec( "vscmd set-measure-mode fast"); exec("vscmd pg-address-check off");
Thanks for any help.

Replies are listed 'Best First'.
Re: Multiple commands with one system call
by kennethk (Abbot) on Sep 29, 2011 at 14:34 UTC
    I know I could use the exec() call to start the program and then proceed to run the rest of my commands.

    No. That's not how exec works - it replaces the running perl process with the specified system command. This means any code after an exec is dead code.

    You may get your expected result with something closer to:

    system ("vp -e $ENV{VREL} ; vscmd set-measure-mode fast; vscmd pg-address-check off");

    Short of that, you'd probably need some sort of daemon, which is likely far more sledge than this hammer needs.

      I would perhaps try this:

      if (system(qw/vp -e/, $ENV{VREL}) != 0) {    # last command failed
          system(qw/vscmd set-measure-mode fast/);
          system(qw/vscmd pg-address-check off/);
      }

      I'm not sure which program gets the ctrl-C, but if it's not the Perl script, that should work.

      Thanks for the correction regarding exec(). It's always good to learn new things. :-)

      The format you suggested doesn't work. It only takes the first command and it ignores the other two.

      Thanks for the input though!

        Are you sure about that? When I execute:

        perl -e 'system("echo 1;echo 2;echo 3")'

        I get the output:

        1
        2
        3

        Are you interrupting execution, perhaps with ^C? In that case, yes, you are killing all jobs simultaneously. You could still execute all three by splitting them into multiple system calls:

        system ("vp -e $ENV{VREL}");
        system ("vscmd set-measure-mode fast; vscmd pg-address-check off");
Re: Multiple commands with one system call
by jethro (Monsignor) on Sep 29, 2011 at 15:09 UTC

    One simple way to start more than one program simultaneously (on linux) is to use the shell to execute them in the background. I.e.

    system("program1 &");
    system("program2 &");

    would run both programs concurrently; the last one wouldn't even need to run in the background. Programs in the background should not do any screen IO because they are detached from the terminal. But you can redirect output to files or /dev/null if that output is uninteresting.

    Another possibility is to use open with a pipe, i.e. open(my $f, "|-", "program1");, which connects $f to the program's STDIN and so lets you send further input to the program.

    A third possibility is to use fork to do the (implicit) forking done by the previous methods yourself. And probably the very best possibility is to let a module like Parallel::ForkManager handle most of that work.

      Hey!

      I had tried using the system call with the "&" sign but the problem with that is that I also need to know when this program is closed by the user so I can run clean up commands. When I make a regular system call I can just run my clean up commands after because they won't be run until after the program is closed.

      Is there an easy way for me to catch when this program is being closed?

      Thanks again for all your help.

        It's worth considering whether you need this degree of control, or whether it would be enough to do something simpler...say, run a job once an hour automatically.

        If you need this degree of control and interactivity, I would use a wrapper script which launches your program, storing the process id (PID) of the process. Then at regular intervals it can verify the process is still running, using ps. When the process stops, the wrapper can send a message.

        A simple way to indicate the status is to create an indicator file, and delete it at the end. You can 'touch' the file at intervals to demonstrate the process is running and not crashed. Alternatively, using the open-with-a-pipe strategy, you can send text messages at intervals: 'still running', 'crashed' or 'completed'.

        As Occam said: Entia non sunt multiplicanda praeter necessitatem.

        Sure, lots of ways. For example, instead of calling your program directly you could call a wrapper script that calls your program and on exit creates a file/appends to a file/touches a file. This can be detected by your script:

        #!/bin/sh
        # wrapper script
        program1
        touch /tmp/finishedwith$1

        You could call this from perl with a random number as parameter (the $1) so that each run would create a different file. In the perl script you would monitor for the existence of this file.

        This is easy but not very clean or safe. Another way is to use fork instead of system, because fork tells you the process id of the newly created process. And you can wait() for the process to finish (which would block your script) or check if the process is still running (somehow ;-).

        But really, look at Parallel::ForkManager, all the functions you want are already there. With "run_on_finish" you can choose a subroutine that is called when the process finishes.

Re: Multiple commands with one system call
by hbm (Hermit) on Sep 29, 2011 at 15:05 UTC

    How about wrapping those three commands into one shell script/batch file; and then calling that one wrapper?

Re: Multiple commands with one system call
by graff (Chancellor) on Sep 30, 2011 at 05:57 UTC
    I'm not sure if I fully understand your statement of the task, but in case it helps, here's a technique I've used to good effect when I want a perl script to run a series of commands in sequential order, and the script has to wait until all the commands are done before moving on:
    #!/usr/bin/perl
    use strict;
    use warnings;

    my @cmd_list = ( 'echo 1', 'echo 2', 'echo 3', 'echo 4' );

    $|++;    # turn off buffering
    my $shpid = open( my $sh, '|-', '/bin/bash' )
        or die "Can't open a shell process: $!\n";

    for my $cmd ( @cmd_list ) {
        print $sh "$cmd\n";
    }
    print $sh "exit\n";
    close $sh;
    waitpid( $shpid, 0 );
    print "Shell's all done. Moving right along...\n";

      Hey!

      I looked around online for answers but I couldn't really find the correct format for opening a bash process from perl, and yours is giving me an error.

      my $shpid = open( my $sh, '|-', '/bin/bash' ) or die "Can't open a shell process: $!\n";

      It says that the preceding line has errors. I don't know enough about bash to fix it. Any chance you could help me out?

        Hey yourself!

        Is there something that makes you unable to paste the actual text of the error message into your post? What does "It" refer to? What does "the preceding line" refer to? What exactly did you try, and what exactly was the full error message? Were you using the code as I posted it, or did you try something "based on" my post? What OS are you using?

        If you could give more explicit information about the problem(s) you're having, you might get some useful help.