in reply to Can two separate responses be sent to the client's browser from Perl, such as via fork{}?

I would also like to update the requesting webpage at the same time

I don't think that will work. HTTP is a request-response protocol: one request, one response. An HTTP client (browser) can't accept two responses to a single request, and that cannot be changed.

To make it look like two things happened at the same time, you need to make the browser issue two requests: one for the update, one for the download. That can be done with client-side Javascript (a timer) or perhaps with an HTTP "Refresh" header. Typically, you first request the update, and the update page triggers the download request, simply because it is very hard to trigger anything from a download request.
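
A minimal sketch of that two-request pattern, assuming a plain CGI setup; the script names /cgi-bin/update.cgi and /cgi-bin/download.cgi are made up, and the download handler itself would be a separate script:

#!/usr/bin/perl
# update.cgi (sketch) -- first request: show the status page and let the
# browser fire the second (download) request on its own.
use strict;
use warnings;

print "Content-Type: text/html\r\n\r\n";
print <<'HTML';
<html>
  <head>
    <!-- variant 1: a "Refresh" meta tag fires the download request after 2s -->
    <meta http-equiv="refresh" content="2; url=/cgi-bin/download.cgi">
  </head>
  <body>
    <p>Your file is being prepared; the download will start shortly.</p>
    <!-- variant 2: trigger it from client-side Javascript (either one alone is enough) -->
    <script>
      setTimeout(function () { window.location = "/cgi-bin/download.cgi"; }, 2000);
    </script>
  </body>
</html>
HTML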

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Re^2: Can two separate responses be sent to the client's browser from Perl, such as via fork{}?
by Polyglot (Chaplain) on Oct 15, 2023 at 11:21 UTC
    I appreciate those insights. To help clarify, what I am actually attempting to do is to have a LaTeX server action run which generates its PDF for download. At the same time, it produces a log output of what worked or did not, as is typical for LaTeX, and I want to feed that back to the client along with the file. So the PDF file would be created at the same time the log data is created, and I'm not sure how I would get them both from separate requests.

    Blessings,

    ~Polyglot~

      There are several old postings that may help:


      So the PDF file would be created at the same time the log data is created, and I'm not sure how I would get them both from separate requests.

      I've just updated an old SANE CGI frontend wrapping scanimage to run on an embedded system. It does something roughly similar: In the scan handler, scanimage emits progress messages that are sent to the browser, while the scanned image is stored in a temp file on the server. The last action of the progress display is to emit a download link (for browsers with Javascript disabled) and a Javascript redirection to that link. The handler for the download link just sends the content of the temp file.
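
      That pattern, reduced to a rough sketch; the real scanimage command line isn't shown here, so a placeholder command and temp file path stand in for it:

        #!/usr/bin/perl
        # scan.cgi (sketch) -- stream progress messages to the browser while the
        # real result is written to a temp file, then hand over to the download
        # handler via a plain link plus a Javascript redirection.
        use strict;
        use warnings;

        $| = 1;    # unbuffered, so each progress line reaches the browser immediately
        print "Content-Type: text/html\r\n\r\n";
        print "<html><body><pre>\n";

        my $tmpfile = '/tmp/scan-result.png';          # one shared temp file, as in the post
        open my $cmd, '-|', 'placeholder_scan_command', '--output', $tmpfile
            or die "cannot start command: $!";
        while (my $line = <$cmd>) {
            print "progress: $line";                   # (HTML-escape this in real code)
        }
        close $cmd;

        print "</pre>\n";
        # last action of the progress display: download link + Javascript redirection
        print qq{<a href="/cgi-bin/download.cgi">Download the result</a>\n};
        print qq{<script>window.location = "/cgi-bin/download.cgi";</script>\n};
        print "</body></html>\n";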

      In theory, there should be some code to remove old temp files (e.g. a cron job, a cleanup routine invoked for every request, or simply a call to unlink at the end of the download handler). The CGI should also use individual temp files for each scan. But there is exactly one user for the scanner (me), the CGI frontend is only available in my local network, and I don't care about having an old scan remaining on the scan server, so I use the same temp file for all scans, and I don't even bother to lock the file.
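
      The matching download handler, using the unlink-at-the-end cleanup variant mentioned above (same placeholder temp file as in the sketch):

        #!/usr/bin/perl
        # download.cgi (sketch) -- send the stored result, then clean it up.
        use strict;
        use warnings;

        my $tmpfile = '/tmp/scan-result.png';

        open my $fh, '<:raw', $tmpfile or die "no result available: $!";
        print "Content-Type: application/octet-stream\r\n";
        print qq{Content-Disposition: attachment; filename="scan.png"\r\n\r\n};
        binmode STDOUT;
        print while <$fh>;                 # stream the file to the browser
        close $fh;

        unlink $tmpfile;                   # cleanup at the end of the download handler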

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

      Unless you want to do something klunky like wrapping the response as JSON (returning an object with one key for the log text and one for the PDF contents, perhaps base64-encoded), you're thinking along the wrong lines. You're going to need to keep context on the server and return things over multiple HTTP requests.
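
      For completeness, that single-response JSON variant would look roughly like this; the file names are placeholders, and JSON::PP and MIME::Base64 both ship with Perl:

        #!/usr/bin/perl
        # Sketch of the "klunky" variant: one response carrying both the log
        # text and the base64-encoded PDF in a single JSON object.
        use strict;
        use warnings;
        use JSON::PP     qw(encode_json);
        use MIME::Base64 qw(encode_base64);

        my $log = do { local $/; open my $fh, '<',     'job.log' or die $!; <$fh> };
        my $pdf = do { local $/; open my $fh, '<:raw', 'job.pdf' or die $!; <$fh> };

        print "Content-Type: application/json\r\n\r\n";
        print encode_json({
            log => $log,
            pdf => encode_base64($pdf, ''),   # '' = no line breaks in the encoding
        });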

      The way I'd approach it is to assign some kind of "job id" to a set of results (the input file, the output DVI or PDF, the log from processing). When you process an input file, you'd keep the context (the results) keyed by that job id, perhaps by saving everything into a temporary directory named after it. You'd then come up with an API that clients can use to request a given result type for a given job id, and either link to the results directly or (possibly with some Javascript) provide an all-in-one page that fetches and displays the log and the PDF, with a separate link to the PDF for download.
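
      A rough sketch of such a fetch endpoint; the script name, directory layout and parameter names are invented for illustration:

        #!/usr/bin/perl
        # fetch.cgi (sketch) -- return one result type for a given job id,
        # e.g. /cgi-bin/fetch.cgi?job=20231015-1234&type=log  (or type=pdf)
        use strict;
        use warnings;
        use CGI ();
        use File::Basename qw(basename);

        my $q    = CGI->new;
        my $base = '/var/tmp/latex-jobs';     # one subdirectory per job id

        my $job  = $q->param('job') // '';
        $job     = basename($job);            # basename() blocks "../" tricks
        my $type = $q->param('type') // 'log';

        my %types = (
            log => [ 'text/plain',      'output.log' ],
            pdf => [ 'application/pdf', 'output.pdf' ],
        );
        my $entry = $types{$type} or die "unknown result type\n";
        my ($mime, $file) = @$entry;

        open my $fh, '<:raw', "$base/$job/$file" or die "no such job or result: $!";
        print $q->header(-type => $mime);
        binmode STDOUT;
        print while <$fh>;                    # stream the requested result
        close $fh;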

      The cake is a lie.
      The cake is a lie.
      The cake is a lie.