Konda has asked for the wisdom of the Perl Monks concerning the following question:

I am using Perl to run a .cgi script that will let a user update a move list via FTP. (Yes, this question is somewhat related to the "big mess CGI/frames" subject, but it involves a different problem altogether.)

If one proceeds to:

http://www.rit.edu/~ee697b/cgi-bin/filedif.html

and presses the button, it will try to run a script called "change.cgi". This runs for a long time, seems to do nothing, and then times out.

What change.cgi does is constantly go back to a file called coord.dat, read from it, and build another .html file out of it called coord.html. It has to keep doing this because it doesn't know when a user will update the file via FTP.

The source for the change.cgi can be read at:

http://www.rit.edu/~ee697b/cgi-bin/change.txt

This leads to my question: is there any way in Perl to run a .cgi script from a web page in the background? (i.e. click the button once and the script runs, without the browser thinking it has to reach a destination and therefore waiting until it has reached that destination?)

Can a process be spawned off and then expire on its own later, without the web browser timing out? (I've had this work before, but only when I run the script directly from my shell account.)

OR is there a way to script a command that makes the browser wait *until* a new move pops up (the file size grows, the move list array gains a line, etc.)?

Also (and this might be asking the same thing as above): is there an equivalent of C's system() function, where you can run another executable or process, compiled in another language or otherwise external to Perl?

Thanks for your time.


Replies are listed 'Best First'.
Re: semi-finite background process
by btrott (Parson) on May 06, 2000 at 21:28 UTC
    Yes, there's a system function. In fact, it's even called system :). There's also exec, which, in combination with fork, might work for you here.
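
    To answer the "run something external to Perl" part directly, here's a minimal sketch of the simple, blocking case first (the program path and argument are made-up placeholders):

        # Blocks until the external command finishes, much like C's system().
        my $status = system("/foo/bar/long/process", "coord.dat");
        die "Couldn't run it: $!" if $status == -1;

        # Backticks capture the command's output instead of just its exit status.
        my $output = `/foo/bar/long/process coord.dat`;

    Since system() waits for the command to finish, on its own it won't keep the browser from hanging; that's where fork and exec come in.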

    I think what you might want to do is fork in your CGI program. In the parent, just exit. In the child, close STDIN, STDOUT, and STDERR, then exec your new process.

    The key is that you close the open (shared) filehandles in the child; this breaks the child's connection to the web server. Otherwise, the server keeps the request open even after the parent exits, and the browser keeps waiting.

    Here's some sample code:

        my $pid = fork;
        die "Couldn't fork: $!" unless defined $pid;
        if ($pid) {
            # parent
            print "Forked off a child!";
            exit;
        } else {
            # child
            close(STDIN);
            close(STDOUT);
            close(STDERR);
            exec "/foo/bar/long/process" or die "Can't exec: $!";
        }
RE: semi-finite background process
by merlyn (Sage) on May 06, 2000 at 22:21 UTC
Re: semi-finite background process
by zaphod.nu (Scribe) on May 06, 2000 at 23:46 UTC
    Making your script print a "Location: " header before it starts to grind should produce about the same result; however, that would not let you show any new content within that session, unless you make the script write a standard .html file that can be reached without running the script.
      zaphod.nu said:
      Making your script print a "Location: " header before it starts to grind should produce about the same result; however, that would not let you show any new content within that session, unless you make the script write a standard .html file that can be reached without running the script.
      Which is not sufficient, at least not for Apache (the number one server). The server must see an exited process and EOF on STDOUT from the CGI process before it will release the browser and consider the transaction complete.
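
      To sketch how that plays out in practice: a CGI along these lines sends the browser a short page right away and then detaches the long-running job, so Apache sees the parent exit and EOF on STDOUT. This is only an illustrative outline; the page text and the "/foo/bar/long/process" path are placeholders.

          #!/usr/bin/perl -w
          use strict;

          $| = 1;    # flush the response before forking so it isn't buffered into the child

          # Give the browser something to render immediately.
          print "Content-type: text/html\n\n";
          print "<html><body>Update started; coord.html will be rebuilt shortly.</body></html>\n";

          my $pid = fork;
          die "Couldn't fork: $!" unless defined $pid;

          if ($pid) {
              exit;    # parent exits, ending the CGI request
          }

          # Child: close the handles shared with the server so it sees EOF,
          # then run the long job.
          close(STDIN);
          close(STDOUT);
          close(STDERR);
          exec "/foo/bar/long/process" or die "Can't exec: $!";

      The initial page here could just as easily be the "Location: " redirect suggested above; the part Apache cares about is that the process exits and the child closes STDOUT.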
RE: semi-finite background process
by princepawn (Parson) on May 07, 2000 at 20:29 UTC
    You can use fork(), but be sure to close STDOUT in your child process, or else the server will keep waiting on the child. This worked for me in a similar situation.