Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Dear monks,

I have a long-running process (a C program) that is invoked by the Perl script and sends statistical data back to the script every second. The script then needs to display it on a web page. Since my server-push method times out intermittently, I have decided to give client pull a try.

First, I display a dummy page with a "Refresh" header of 3 seconds. During those 3 seconds, my Perl script retrieves data from the C program and creates an HTML page on disk named page1.html. After the dummy page has been displayed for 3 seconds, the "Refresh" takes the browser to page1.html and displays it for 3 seconds while the Perl script creates page2.html with new data. page1.html then refreshes to page2.html, which is displayed for 3 seconds; page2.html then refreshes back to page1.html, and so on until the process is finished.

The problem I have encountered is that the browser does not refresh until the entire Perl script has finished running, which can last more than an hour. During this time only the dummy page is displayed, and once the script is done, page1.html and page2.html get displayed and call each other. This is my sample script, where sleep 30 simulates the long-running process. Did I do something wrong?

#!/usr/bin/perl
#use HTML::Template;
#use CGI qw(:standard);
$| = 1;
if ($pid = fork)
{
    sleep 30;    # simulates the long-running process
}
else
{
    print "Content-type: text/html\n\n";
    #print '<HTML>';
    print '<HEAD>';
    print '<META HTTP-EQUIV="Refresh" CONTENT="3; URL=/cgi_data/page1.html">';
    print '<TITLE>New Site notification</TITLE>';
    print '</HEAD>';
    print '<BODY>';
    print 'My homepage has moved to a new location. ';
    print "I am here";
    print '</BODY>';
    print '</HTML>';
    exit;
}

Replies are listed 'Best First'.
Re: Client Pull Not Working correctly
by holli (Abbot) on Aug 24, 2007 at 09:09 UTC
    I've recently solved a similar problem, namely importing a big CSV file of records into my system. I'll outline my solution path; maybe it helps you.

    The problem:

    The user should be able to upload a file that gets sanity-checked and imported into a database. As such files can be very big and the import can take a long time, the user shall be presented with a progress bar until the import has finished.

    The idea:

    • User triggers the upload.
    • Server takes the file and saves it into an "incoming" directory.
    • Server generates a unique token and renames the CSV file accordingly.
    • Server creates a "progress counter file" for that token and initializes it with "0".
    • Server makes a non-blocking call to the background importer and tells it which file (and therefore which token) to import.
    • Server sends back the HTML with the progress bar and a piece of JavaScript to update it.
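    The upload-handling steps above can be sketched roughly as follows. This is a minimal sketch, not holli's actual code: the use of a temp directory as a stand-in for the "incoming" directory, the token format, and the file names are all my assumptions.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy qw(move);
use File::Temp qw(tempdir);

# Stand-in for the real "incoming" directory (assumption for this sketch)
my $INCOMING = tempdir(CLEANUP => 1);

# Save an uploaded file under a unique token and create its progress file
sub accept_upload {
    my ($uploaded_tmpfile) = @_;

    # Generate a unique token and rename the CSV file accordingly
    my $token = sprintf '%d_%d_%04d', time, $$, int rand 10_000;
    move($uploaded_tmpfile, "$INCOMING/$token.csv")
        or die "cannot move upload: $!";

    # Create the progress counter file and initialize it with "0"
    open my $fh, '>', "$INCOMING/$token.progress" or die $!;
    print {$fh} '0';
    close $fh;

    return $token;    # handed to the background importer and the browser
}
```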


    The biggest part of the problem is making the call to the importer non-blocking, so that the caller can finish and send its response. That's the reason why forking is not an option here. My first attempt to solve this was by using IPC::Open2, but that didn't work out.

    I found the easiest way to do so is to use LWP::UserAgent with a timeout of 1. In my case I actually used an RPC call to a web service, but the principle remains the same: avoid blocking by making an HTTP request and not waiting for the response.
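    A fire-and-forget call along those lines might look like the sketch below. The importer URL is invented for illustration; the request is expected to time out, which is the point — the importer keeps running server-side while the caller returns immediately.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# Kick off the background importer without waiting for its response.
sub start_import {
    my ($token) = @_;
    my $ua = LWP::UserAgent->new(timeout => 1);   # give up after one second
    # The GET will normally just time out -- that is intentional: the
    # importer keeps running on the server while we return at once.
    $ua->get("http://localhost/cgi-bin/importer.cgi?token=$token");
    return;    # ignore the (timed-out) response
}
```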

    So what we now have is a background-job on the server that does the importing. While running, this job will also update the progress counter file.

    The last part of the setup is a little server component that is passed a token and returns the progress for that token (i.e., it reads the progress counter file and returns its content as plain text).

    This component will be called by the Javascript in order to update the progress bar.
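    A minimal version of such a progress component could look like this. It avoids the CGI module and parses the query string by hand; the file locations and the query parameter name are assumptions for this sketch.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed location of the progress counter files
our $PROGRESS_DIR = '/var/spool/import/incoming';

# Look up the counter for a token; unknown or bad tokens report 0
sub progress_for {
    my ($token) = @_;
    return '0' unless defined $token && $token =~ /\A\w+\z/;   # untaint
    open my $fh, '<', "$PROGRESS_DIR/$token.progress" or return '0';
    my $count = <$fh>;
    close $fh;
    chomp $count if defined $count;
    return defined $count ? $count : '0';
}

# Called by the page's JavaScript, e.g. progress.cgi?token=abc123
my ($token) = ($ENV{QUERY_STRING} // '') =~ /token=(\w+)/;
print "Content-type: text/plain\n\n", progress_for($token);
```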

    The solution:

    This is the import controller, followed by the HTML for the progress bar, the updater JavaScript, and finally the server-side progress component. The snippets are taken directly from the production server and contain German; let me know if that is a problem.


    holli, /regexed monk/
      Thanks. I could use some code from your example.
Re: Client Pull Not Working correctly
by Anonymous Monk on Aug 24, 2007 at 09:41 UTC
    Try this
    #!/usr/bin/perl --
    use CGI qw(:standard);
    $| = 1;
    if (my $pid = fork) {
        # parent does
        print header(), q~
    <HTML>
    <HEAD>
    <META HTTP-EQUIV="Refresh" CONTENT="3; URL=/cgi_data/page1.html">
    <TITLE>New Site notification</TITLE>
    </HEAD>
    <BODY>My homepage has moved to a new location. I am here</BODY></HTML>
    ~;
    }
    elsif (defined $pid) {
        # child does
        close STDOUT;    # tell apache no more output
        sleep 30;
    }
    else {
        die "Cannot fork: $!";
    }
    exit 0;
      From this example and from the link in the first response above, I guess all I need is to insert the

      close STDOUT;

      in the child process after I print the HTML page, but it does not seem to do the trick. Your example does not work either: the page still hangs for 30 seconds before it gets redirected to page1.html, for some reason. I have the example at

      http://129.107.52.101/cgi-bin/testpull.cgi
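      One possible explanation (an assumption, not verified against your server): Apache keeps the response open until every handle tied to the request is closed, and the forked child inherits STDERR as well as STDOUT, so closing STDOUT alone may not be enough. A sketch that detaches the child completely, reopening all three standard handles to /dev/null (setsid is from the core POSIX module):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

$| = 1;
my $pid = fork;
die "Cannot fork: $!" unless defined $pid;

if ($pid) {
    # Parent: send the page and exit so Apache can finish the response
    print "Content-type: text/html\n\n";
    print '<HTML><HEAD>',
          '<META HTTP-EQUIV="Refresh" CONTENT="3; URL=/cgi_data/page1.html">',
          '<TITLE>New Site notification</TITLE></HEAD>',
          '<BODY>I am here</BODY></HTML>';
    exit 0;
}

# Child: detach from the session and drop every handle that could keep
# Apache waiting on the request (STDERR included, not just STDOUT).
setsid();
open STDIN,  '<', '/dev/null' or die $!;
open STDOUT, '>', '/dev/null' or die $!;
open STDERR, '>', '/dev/null' or die $!;
sleep 30;    # simulates the long-running work
exit 0;
```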
Re: Client Pull Not Working correctly
by Anonymous Monk on Aug 24, 2007 at 08:08 UTC