May I recommend that you restructure your process into two separate things:
- Run the simulation
- Display the information
If you split your program up into these two parts, you can easily apply the technique from Watching Long Processes Through CGI: the simulation process simply creates a log file and writes to it. This also means that multiple users can watch the progress of a single simulation, and that one user can have multiple simulations running at the same time.
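The split can be sketched in a few lines of Perl. Everything here is illustrative, not from the article: the log file name and both helper subs are invented. The simulation side appends one line per pass; the display side (a CGI, say) shows the last few lines.

```perl
#!/usr/bin/perl
# Sketch of the "run the simulation / display the information" split.
# The file name and both subs are illustrative assumptions.
use strict;
use warnings;

# --- simulation side: append each result as it is produced ---
sub log_result {
    my ($logfile, $line) = @_;
    open my $fh, '>>', $logfile or die "Cannot append to $logfile: $!";
    print {$fh} $line, "\n";
    close $fh;
}

# --- display side (e.g. inside a CGI script): fetch the last N lines ---
sub tail_lines {
    my ($logfile, $n) = @_;
    open my $fh, '<', $logfile or return ();
    my @lines = <$fh>;
    close $fh;
    chomp @lines;
    splice @lines, 0, @lines - $n if @lines > $n;
    return @lines;
}
```

Because the log file is the only point of contact, any number of display processes can read it while one simulation writes it.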
| [reply] |
I built something like this for an EMS system using SVG as the user interface a few years ago. A similar approach would probably work in HTML.
In my case,
- the first request sent the whole page with current data.
- Scripting in the page performed a new request at the specified interval to retrieve data (not a whole new page).
- When new data was received, the script updated the interface accordingly.
With a careful design, you can send a very small amount of data with each update compared to what would be needed to replace the whole page.
In order to avoid restarting the simulation, you could run the simulation as a separate process and communicate with it through the CGI, instead of running the CGI process indefinitely.
Another possibility (that I haven't tried, so take with a grain of salt) would be to make a request to the simulation immediately after the page loads and process the data as an incoming stream. Take each line or record of the data stream and update the interface as I described above.
The key to both of these approaches is client-side intelligence to update the screen, rather than rewrite it all on the server side.
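On the server side, the streaming variant could look something like the sketch below. This is hedged guesswork, not the EMS code: `get_next_record()` and the record format are invented, and the loop only runs when the script is actually invoked as a CGI.

```perl
#!/usr/bin/perl
# Sketch of the streaming idea: a CGI that keeps the connection open
# and prints one record per simulation stage with buffering turned off,
# so the client-side script can process the response as an incoming
# stream. get_next_record() is a hypothetical stand-in for the real feed.
use strict;
use warnings;

$| = 1;    # unbuffer STDOUT so each record is sent immediately

sub get_next_record {
    my ($pass) = @_;
    return "pass=$pass";    # placeholder for real simulation data
}

if ($ENV{GATEWAY_INTERFACE}) {    # stream only when run as a CGI
    print "Content-Type: text/plain\n\n";
    for my $pass (1 .. 5) {       # a real feed would run until shutdown
        print get_next_record($pass), "\n";
        sleep 10;                 # one record per 10-second stage
    }
}
```

The essential part is `$| = 1`: without it, the web server may hold the output in a buffer and the client sees nothing until the script exits.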
| [reply] |
Dear all,
Thanks for the suggestions.
What I have decided to do is fairly straightforward and easy, so it doesn't need anything fancy like forking processes as suggested in the article pointed out above.
As the simulations are not time critical (the 10 seconds was purely for observation purposes, so that people had time to study a screen of data looking for salient points), the perl program has time to read a shed load of data if required.
SO:
(upon PC startup in the morning) the perl program initialises all its variables, then checks for the existence of a specifically-named data file. If the file does not exist, the program waits for simulator input to start the whole process for the day. It forms an HTML file with the first pass of data received and displays the HTML on screen. The perl program then stores ALL relevant pieces of data that it needs in the specifically-named data file.
After a delay (the 10 seconds above), the HTML file displaying the data calls for itself to be refreshed by running the perl program again. This is done via Javascript, as the META "refresh" tag is NOT standard and, according to the W3C's own technical pages, should not be used.
The perl program, upon entry again from the very start, again initialises all its variables. As before, it checks for the existence of the specifically-named data file (remember it was created at the end of the very first daily pass), sees that it is there, and reads in the values that it requires from the file, over-writing the defaults that were set up by the initialisations. It then creates a new HTML file/display with the new values, and away it goes until it is time for cocoa at bed time.
This interaction gives both the perl and the html plenty of time to do their job (remember there is nothing else going on on this particular PC), gives personnel time to read what is on the screen, and enables the perl program to 'remember' what went before. It is also easy to document in both paper and program terms, so that anyone with only a limited knowledge of perl and html can adapt it easily if so required.
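One pass of the cycle described above might be skeletoned as follows. To be clear, this is a sketch of my reading of the description, not the actual program: the file names, the key=value state format, the placeholder reading, and the `display.cgi` URL in the Javascript are all invented.

```perl
#!/usr/bin/perl
# One pass of the daily cycle: read state (or start fresh), fold in the
# new data, write the display page, save state for the next pass.
# All names and formats here are illustrative assumptions.
use strict;
use warnings;

my $state_file = 'sim_state.dat';
my $html_file  = 'display.html';

# 1. Initialise defaults, then overwrite them from the state file
#    if this is not the first pass of the day.
my %state = (pass => 0, total => 0);
if (-e $state_file) {
    open my $fh, '<', $state_file or die "Cannot read $state_file: $!";
    while (<$fh>) {
        chomp;
        my ($key, $value) = split /=/, $_, 2;
        $state{$key} = $value;
    }
    close $fh;
}

# 2. Fold in the new simulator reading (a placeholder value here).
my $reading = 42;
$state{pass}++;
$state{total} += $reading;

# 3. Write the display page; a Javascript timer re-runs the perl
#    program after 10 seconds instead of the non-standard META refresh.
open my $html, '>', $html_file or die "Cannot write $html_file: $!";
print {$html} <<"HTML";
<html><head>
<script>setTimeout(function () { window.location = "display.cgi"; }, 10000);</script>
</head><body>
Pass $state{pass}: running total $state{total}
</body></html>
HTML
close $html;

# 4. Save everything the next pass will need.
open my $out, '>', $state_file or die "Cannot write $state_file: $!";
print {$out} "$_=$state{$_}\n" for sort keys %state;
close $out;
```

On the first run of the day the state file is absent, so the defaults stand; on every later run the file is found and the defaults are overwritten, which is exactly the 'remembering' effect described.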
Things are chugging away merrily as I write!
Thanks for the suggestions
| [reply] |
Corion,
I may have misled you slightly - the simulation is a real time one that is on-going for several weeks.
What is required is that the simulator program starts up on its dedicated PC in the morning when it is turned on and runs through the day until it is switched off at night. The display program also runs on this dedicated PC, showing real-time analysis and summary information, taking the feed from the simulator every 10 seconds (after each stage) and displaying this data.
What I want is for the cgi program displaying the data to show a new page every 10 seconds, rather than (as at the moment) tagging the new HTML page on to the bottom of the previous one.
Does that help any? | [reply] |
#!perl -w
use strict;
use CGI;

do_all_the_initialisations();
while (1) {
    my $information = get_information();
    print <<HTML;
This is my HTML page with $information
HTML
    sleep 10;
}
And basically, that's not how CGI works. A CGI program is supposed to output one page worth of HTML and then exit. So you will either need to get rid of the loop and (re)do the initialization, or create the HTML page from a third program every 10 seconds and just serve that page as a static HTML file. | [reply] [d/l] |
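The second option above, a third program that rewrites a static page every 10 seconds, could be sketched like this; the file name and the `write_page()` helper are illustrative, and the writer uses a write-then-rename so the web server never serves a half-written file.

```perl
#!/usr/bin/perl
# Sketch of the "third program" option: a long-running generator (not
# a CGI) rewrites a static HTML page, which the web server then serves
# as an ordinary file. Names here are illustrative assumptions.
use strict;
use warnings;

sub write_page {
    my ($file, $information) = @_;
    # Write to a temp file and rename it into place, so a browser
    # request arriving mid-write still gets a complete page.
    open my $fh, '>', "$file.tmp" or die "Cannot write $file.tmp: $!";
    print {$fh} <<"HTML";
<html><body>This is my HTML page with $information</body></html>
HTML
    close $fh;
    rename "$file.tmp", $file or die "Cannot rename to $file: $!";
}

# The generator loop (it would run indefinitely in practice):
# while (1) {
#     write_page('status.html', get_information());
#     sleep 10;
# }
```

The CGI problem then disappears entirely: the browser refreshes a plain static URL, and only the generator ever touches the simulation.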