bartender1382 has asked for the wisdom of the Perl Monks concerning the following question:

This question is also posted on StackOverflow

I have this very simple Perl script on my Linux server.

What I would like to be able to do is to call the script from a browser on a separate machine
Have the script initiate a fork ->
Have the parent send an HTTP response -> (freeing up the browser)
Immediately end the parent ->
Allow the child to do its job, heavy complex database work, which could take a minute or two ->
Have the child end itself with no output whatsoever

When I call this script from a browser, the browser does not receive the sent response till the child is complete.

Yes, it works when called from the command line.

Is what I want to do possible?
P.S. I even tried it with Proc::Simple->start, but I get the same hanging.

#!/usr/bin/perl
local $SIG{CHLD} = "IGNORE";
use lib '/var/www/cgi-bin';
use CGI;
my $q = new CGI;

if (!defined($pid = fork())) {
    die "Cannot fork a child: $!";
} elsif ($pid == 0) {
    print $q->header();
    print "i am the child\n";
    sleep(10);
    print "child is done\n";
    exit;
} else {
    print $q->header();
    print "I am the parent\n";
    print "parent is done\n";
    exit 0;
}
exit 0;

Replies are listed 'Best First'.
Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by dave_the_m (Monsignor) on Apr 07, 2022 at 08:24 UTC
    The direct answer to your question is that the filehandles which form the connection to the web client get duplicated in the child, so the connection doesn't get closed until both the parent and child have closed the filehandles. So normally the parent would fork, the child would immediately close any filehandles it shares with the parent, then go off and do its thing. In the meantime the parent handles doing any HTML output then quits.

    Note that you show both the parent and child outputting the headers, which is wrong.

    But more generally this arrangement is likely to be a bad idea. It's very easy for someone to maliciously or inadvertently send many short requests to the web server, which results in thousands of long-running forked processes vying for memory and CPU.
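Concretely, the pattern described above might look like this under plain CGI (a minimal sketch, not the OP's actual code: the raw header and the sleep are stand-ins for CGI.pm and the real database work):

```perl
#!/usr/bin/perl
use strict;
use warnings;

local $SIG{CHLD} = 'IGNORE';   # don't accumulate zombie children

defined(my $pid = fork()) or die "Cannot fork a child: $!";
if ($pid == 0) {
    # child: close the duplicated connection handles *first*, so the
    # web server sees the connection released as soon as the parent exits
    close STDIN;
    close STDOUT;
    close STDERR;
    sleep 2;     # stand-in for the heavy database work
    exit 0;      # no output whatsoever
}
# parent: only the parent talks to the browser, then falls off the end
print "Content-Type: text/plain\r\n\r\n";
print "Job started; the response is already complete.\n";
```

The key ordering is that the child closes its copies of the handles before doing anything slow; the parent's output and exit then end the HTTP response immediately.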


Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by haukex (Archbishop) on Apr 07, 2022 at 13:59 UTC

    Although the following is probably overkill - plus it won't run as-is as a CGI script - I've wanted to test out code like this for a while now, so I took this opportunity to write up an example using Mojolicious, and in the process hopefully show some of the advantages of modern web technologies and frameworks over CGI :-) This can be run as a standalone server in development mode via morbo, and as a simple server via the script's daemon command.

    #!/usr/bin/env perl
    use 5.028;
    use Mojolicious::Lite -signatures;
    use Mojo::JSON qw/encode_json/;
    use Mojo::Util qw/sha1_sum/;

    # NOTICE: This script is designed to work in a single-threaded,
    # single-process server only! (morbo or Mojolicious::Command::daemon)

    get '/' => sub ($c) { $c->render(template => 'index') } => 'index';

    my %runningprocs;

    post '/submit' => sub ($c) {
        # form variables
        my $foo = $c->param('foo');
        my $bar = $c->param('bar');
        # set up the event dispatcher
        my $ee = Mojo::EventEmitter->new;
        # hash collisions theoretically possible but very unlikely
        # (could check `exists $runningprocs{$id}`)
        my $id = sha1_sum( time."\0".rand."\0".(0+$ee) );
        $runningprocs{$id} = $ee;
        $c->render(json => { eventurl=>$c->url_for('status', id=>$id) });
        # set up and run the subprocess
        my $subproc = Mojo::IOLoop->subprocess;
        $subproc->on(spawn => sub ($sp) {
            $ee->emit(status => { progress=>"Subprocess spawned in PID ".$sp->pid }) });
        $subproc->on(progress => sub ($sp, @data) {
            $ee->emit(status => { progress=>\@data }) });
        # give client a second to connect to event source
        Mojo::IOLoop->timer(1 => sub {
            $subproc->run(
                sub ($sp) { return long_running_subprocess($sp, $foo, $bar) },
                sub ($sp, $err, @results) {
                    if ($err) { $ee->emit(status => { error=>"$err", done=>"Error: $err" }) }
                    else { $ee->emit(status => { done=>\@results }) }
                    # don't clobber the event listener immediately
                    # (in case client took longer to re/connect)
                    Mojo::IOLoop->timer(10 => sub { delete $runningprocs{$id} });
                });
        });
    } => 'formsubmit';

    get '/status/:id' => sub ($c) {
        my $id = $c->stash('id');
        my $ee = $runningprocs{$id} or return $c->reply->not_found;
        $c->inactivity_timeout(300);
        $c->res->headers->content_type('text/event-stream');
        $c->write;
        my $timerid = Mojo::IOLoop->recurring(10 => sub { $c->write(":\n\n") }); # comment as keepalive
        my $cb = $ee->on(status => sub ($ev, $data) {
            my $json = encode_json($data) =~ s/\n//gr;
            $c->write("event: status\ndata: $json\n\n");
        });
        $c->on(finish => sub ($c) {
            $ee->unsubscribe(status => $cb);
            Mojo::IOLoop->remove($timerid);
        });
    } => 'status';

    sub long_running_subprocess {
        my ($subproc, $foo, $bar) = @_;
        # this code is now running in the subprocess!
        $subproc->progress("Beginning work on Foo='$foo'");
        sleep 5;
        $subproc->progress("Finished work on Foo");
        if ( length $bar ) {
            $subproc->progress("Beginning work on Bar='$bar'");
            sleep 5;
            $subproc->progress("Finished work on Bar");
        }
        return "All done!";
    }

    app->start;
    __DATA__

    @@ index.html.ep
    % layout 'main', title => 'Hello, World!';
    <div>
    %= form_for formsubmit => ( method=>'post', id=>'myform' ) => begin
      <div>
    %= label_for foo => 'Foo'
    %= text_field foo => ( placeholder=>"Foo", required=>'required' )
      </div><div>
    %= label_for bar => 'Bar'
    %= text_field bar => ( placeholder=>"Bar" )
      </div><div>
    %= submit_button 'Process'
      </div>
    %= end
    </div>
    <pre id="myoutput" style="padding:3px 5px;border:1px solid black;">
    Output will display here.
    </pre>
    <script>
    "use strict";
    function addmsg(txt) {
        $(document.createTextNode(txt)).appendTo($('#myoutput'));
    }
    function getevents(url) {
        addmsg("Listening on "+JSON.stringify(url)+"\n");
        var events = new EventSource(url);
        events.onerror = function(err) {
            // the event apparently doesn't contain any details
            var errmsg = "Error connecting to EventSource";
            addmsg(errmsg);
            alert(errmsg);
            $("#myform :input").prop("disabled", false);
        };
        events.addEventListener('status', function (event) {
            var data = JSON.parse(event.data);
            if ( 'progress' in data ) {
                addmsg("Progress: "+JSON.stringify(data.progress)+"\n");
            }
            if ( 'error' in data ) {
                addmsg("Error: "+JSON.stringify(data.error)+"\n");
                alert(data.error);
            }
            if ( 'done' in data ) {
                addmsg("Done: "+JSON.stringify(data.done)+"\n");
                events.close();
                $("#myform :input").prop("disabled", false);
            }
        }, false);
    }
    $(function () {
        $('#myform').on('submit', function (e) {
            e.preventDefault();
            $("#myoutput").text("Submitting form\n");
            var thedata = $('#myform').serialize(); // before disabling!
            $("#myform :input").prop("disabled", true);
            $.ajax({ type: 'post', url: '<%= url_for 'formsubmit' %>', data: thedata })
            .done( function( data ) { getevents(data.eventurl); })
            .fail( function( jqXHR, textStatus, errorThrown ) {
                var errmsg = "Form submission error: "+textStatus+" / "+jqXHR.status+" "+errorThrown;
                addmsg(errmsg);
                alert(errmsg);
                $("#myform :input").prop("disabled", false);
            })
        });
    });
    </script>

    @@ layouts/main.html.ep
    <!DOCTYPE html>
    <html>
    <head>
    <title><%= title %></title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="ormalize.min.css"
          integrity="sha512-NhSC1YmyruXifcj/KFRWoC561YpHpc5Jtzgvbuzx5VozKpWvQ+4nXhPdFgmx8xqexRcpAglTj9sIBWINXa8x5w=="
          crossorigin="anonymous" referrerpolicy="no-referrer" />
    <script src=""
            integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4="
            crossorigin="anonymous"></script>
    </head>
    <body>
    %= content
    </body>
    </html>

    If this needed to run in a threaded/multiprocess HTTP server, it would even be possible to replace the communication via EventEmitter objects with a system like Redis - it's pretty simple to spin up a server via Docker and connect to it using e.g. Mojo::Redis::PubSub.

      Is this not reinventing part of Minion?
        Is this not reinventing part of Minion?

        Depends on which part you mean. Minion is a good suggestion, but it also depends on the OP's requirements - if it's just a single task, then I think my code is good enough, but if OP needs to run more subprocesses then Minion's features would certainly be an advantage. However, AFAICT Minion doesn't support EventSource, which was a major point of my post.

        Edit: Minor clarification.
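For readers wondering what the Minion route would look like, here is a rough, untested sketch (it assumes Minion and Minion::Backend::SQLite are installed; the task name, route, and database file are made up for illustration):

```perl
#!/usr/bin/env perl
use Mojolicious::Lite -signatures;

# assumes Minion + Minion::Backend::SQLite are installed
plugin Minion => { SQLite => 'sqlite:minion.db' };

# the task body runs in a separate worker process and may take minutes
app->minion->add_task(heavy_db_work => sub ($job, $foo) {
    # ... long-running database work here ...
    $job->finish("done with $foo");
});

post '/submit' => sub ($c) {
    my $id = $c->minion->enqueue(heavy_db_work => [ $c->param('foo') ]);
    # the browser is freed immediately; poll the job id for status
    $c->render(text => "Queued job $id\n");
};

app->start;   # run a worker separately: perl script.pl minion worker
```

The trade-off versus the EventSource approach above is that Minion gives you persistence, retries, and multiple workers, but (as noted) no built-in server-push of progress to the browser.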

Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by GrandFather (Saint) on Apr 07, 2022 at 04:02 UTC

      Do the replies to Managing a long running server side process using CGI help?

      There's a lot to go through there, but I am trying to avoid the CRON path. Just seems kludgy to me

      However a thought hit me, and I'll have to reread the link you posted a few times to see if it's in there:

      Is it possible to have a perl script, initiated from a browser, manually construct the proper HTTP::Response, and send it in the middle of my perl script? Theoretically, that would free the browser at the client end, and the rest of the perl script would contain the child code, which has no output, and keeps on running till completion.

      Is such a beast possible? And even if so, how dangerous would it be?

      That's a lot of "ifs" but just trying to look at it from a different perspective
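For what it's worth, the "send the response mid-script" idea can be sketched as below. This is an assumption-laden sketch: whether closing STDOUT actually frees the browser depends on the web server (Apache's mod_cgi, for example, may still wait for the process to exit), and the placeholder sub stands in for the real database work.

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub long_database_work { sleep 2; return 'done' }   # placeholder

# finish the HTTP response up front
print "Content-Type: text/plain\r\n\r\n";
print "Job started; you can close this page.\n";

# then drop the handles the web server is reading from
close STDOUT;
close STDERR;

# the script keeps running after the response is (hopefully) complete
my $result = long_database_work();
```

So yes, the beast is possible in principle; the danger is exactly the server-dependence noted above, plus the same pile-up-of-long-running-processes risk that applies to the fork approach.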

        There's a lot to go through there, but I am trying to avoid the CRON path. Just seems kludgy to me

        Not really. If you need something to run independently of web calls, that's how you should design it. One way of avoiding CRON would be to have a background process that always runs and gets controlled from the CGI scripts via interprocess messaging. There are many, many solutions for this. As the author of the Net::Clacks module, that is what I usually recommend (since then I would be able to help you if you run into problems). There's a slightly outdated HowTo on PM, see Interprocess messaging with Net::Clacks, and the package comes with some example programs as well.

        perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'

        Did you get as far as my own reply and its follow up in that thread? It looks like a pretty good fit for what you want. The key is the session management stuff that allows the long running app to communicate back to the manager app.

        Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by 1nickt (Canon) on Apr 07, 2022 at 14:55 UTC
Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by haukex (Archbishop) on Apr 16, 2022 at 19:03 UTC

    I know this is an older thread but I saw that on StackOverflow, two days after this question, you posted "How can I have one perl script call another and get the return results?" involving the use of system, and you got a response from the venerable brian_d_foy. With all due respect to him, I do have to say that I disagree with the suggestion of system("$^X /var/www/cgi-bin/ filename=$filename"), especially from a CGI script. I wrote a longer node about the security issues (!!!) that the use of system with a single argument string has, and how to avoid them, here: Calling External Commands More Safely.

    At the very least, you should use the multi-argument form system($^X,'/var/www/cgi-bin/',"filename=$filename") - but even better would be a module like in this case IPC::System::Simple, as its systemx function guarantees to never invoke the shell, and its error handling is much better.
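To illustrate why the list form is safer, here is a small self-contained sketch; `perl -e` stands in for the second CGI script, and the hostile $filename is made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# hostile input: in the single-string form system("perl script.pl filename=$filename"),
# the shell would happily execute the part after the semicolon
my $filename = 'foo; rm -rf /tmp/important';

# list form: the arguments go straight to exec(), no shell is ever
# involved, so $filename arrives in @ARGV as one literal argument
my @cmd = ($^X, '-e', 'print "got: $ARGV[0]\n"', "filename=$filename");
system(@cmd) == 0 or die "child failed: $?";
```

IPC::System::Simple's systemx works the same way but additionally refuses ever to fall back to the shell and turns failures into descriptive exceptions.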

Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by Anonymous Monk on Apr 07, 2022 at 08:57 UTC
Re: Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?
by sectokia (Pilgrim) on Apr 07, 2022 at 12:15 UTC

    My suggestion would be to just run your own event-based HTTP server in Perl.

    For example, this waits for a request to http://localhost:9090/doBigThing, sends back a text message, and then starts doing the big fancy database stuff before going back to waiting:

    use AnyEvent;
    use AnyEvent::HTTPD;

    my $httpd = AnyEvent::HTTPD->new(port => 9090);
    my $cv = AnyEvent->condvar;

    $httpd->reg_cb(
        '/doBigThing' => sub {
            my ($httpd, $req) = @_;
            $req->respond({ content => ['text/html',
                "OK starting big database thingy now... I will be busy for several minutes.... "] });
            $cv->send;
        }
    );

    while (1) {
        $cv->recv;
        print "Doing big fancy database stuff for a long time here...";
        $cv = AnyEvent->condvar;
    }

      Someone gave the answer over at StackOverflow:

      StackOverflow User: mob

      In general you must detach the child process from its parent to allow the parent to exit cleanly -- otherwise the parent can't assume that it won't need to handle more input/output.

      } elsif ($pid == 0) {
          close STDIN;
          close STDERR;
          close STDOUT;   # or redirect
          do_long_running_task();
          exit;

      In your example, the child process is making print statements until it exits. Where do those prints go if the parent process has been killed and closed its I/O handles?
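Building on that reply, a fuller detach might look like the following sketch (POSIX-specific; do_long_running_task and the /dev/null redirects are illustrative placeholders):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

sub do_long_running_task { sleep 1 }   # placeholder for the real work

defined(my $pid = fork()) or die "Cannot fork: $!";
if ($pid == 0) {
    # child: leave the parent's session entirely, then point the standard
    # handles somewhere harmless so later prints have a destination
    setsid() or die "setsid failed: $!";
    open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";  # or a log file
    open STDERR, '>', '/dev/null' or die "reopen STDERR: $!";
    do_long_running_task();
    exit 0;
}
# parent can now respond and exit without waiting on the child
```

Reopening the handles (rather than just closing them) answers mob's closing question: any later prints in the child go to /dev/null or a log file instead of a dead connection.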