If this is a private project, the process doesn't need to run very often, and it finishes in a foreseeable amount of time, you could just write your result to a "buffer.txt" within your webserver's document root and use curl, wget or others to fetch this file.
If you need to be sure that the job has finished before the download runs, create the file as "buffer.txt.new" and rename it to "buffer.txt" as soon as you're done. On the other end, curl and wget have parameters which let them download only files that are newer than the local copy.
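The producer side of this could be sketched roughly like so (the paths and the stand-in job are assumptions; use your webserver's real document root and your real process):

```shell
#!/bin/sh
# Producer side: write under a temporary name, then rename.
# rename(2) is atomic on a single filesystem, so a downloader
# never sees a half-written buffer.txt.
DOCROOT=/tmp/docroot           # assumption: your webserver's document root
mkdir -p "$DOCROOT"

# Stand-in for the real long-running job.
run_the_process() { echo "result data"; }

run_the_process > "$DOCROOT/buffer.txt.new"
mv "$DOCROOT/buffer.txt.new" "$DOCROOT/buffer.txt"
```

The key point is that "buffer.txt" either doesn't exist yet or is complete; there is no window in which a fetch sees a partial result.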
I'm using wget this way myself, parsing its output to fetch files from an FTP server and process them only when they have been updated.
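A consumer-side sketch of "process only when updated": in real use the fetch line would be `wget -q -N http://server1/buffer.txt` (`-N`/`--timestamping` skips the download unless the remote copy is newer than the local one); here a local `cp -p` stands in for the download so the sketch runs offline, and a stamp file replaces parsing wget's output:

```shell
#!/bin/sh
# Simulated remote file, so the sketch is self-contained.
mkdir -p /tmp/mirror
echo "result data" > /tmp/src_buffer.txt

cd /tmp/mirror
cp -p /tmp/src_buffer.txt buffer.txt   # stand-in for: wget -q -N "$URL"

# Process only when buffer.txt is newer than the stamp left by the
# previous run ([ -nt ] is also true when the stamp doesn't exist yet).
if [ buffer.txt -nt buffer.txt.stamp ]; then
    echo "processing updated buffer.txt"   # replace with real processing
    touch -r buffer.txt buffer.txt.stamp   # remember this version's mtime
fi
```

On a second run with an unchanged source, the mtimes match, the `-nt` test fails, and the processing step is skipped.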
This solution is very quick to set up, but there are many things that could go wrong.
The second example is close to the first, but this time the download isn't triggered by cron or anything else that runs on a timer, but by the server running the process. An abstract CGI flow there could be:

    Parse_the_parameters();
    Run_the_process();
    Call_URL(http://server2/trigger_download.cgi);

trigger_download.cgi could be just a bash script which starts curl/wget, or a Perl CGI which does the same and then processes the result. (Actually, I'm using curl much more often than LWP, because a simple one-line curl call requires about a dozen lines with LWP :-( )
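A minimal sketch of such a trigger_download.cgi (the name, URL, and paths are assumptions): a CGI script must emit a header block, a blank line, then the body, and server1 calls this URL once the job has finished. The fetch is stubbed out here so the sketch runs without a network; the real command is shown in the comment.

```shell
#!/bin/sh
# Hypothetical trigger_download.cgi on server2.

fetch_buffer() {
    # Real version:  wget -q -O /tmp/buffer.txt http://server1/buffer.txt
    # Stubbed here so the sketch runs offline.
    echo "result data" > /tmp/buffer.txt
}

process_buffer() {
    # Replace with the real processing; here we just check it's non-empty.
    [ -s /tmp/buffer.txt ]
}

# CGI response: headers, blank line, then the body.
echo "Content-Type: text/plain"
echo ""

if fetch_buffer && process_buffer; then
    echo "OK"              # the acknowledgement the caller can check for
else
    echo "FETCH FAILED"
fi
```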
If you're in a commercial environment and both servers can reach each other only via the internet, you should add authentication on both sides (htaccess Basic Auth may be enough), let the called server acknowledge that the request has been fully processed, and have the calling server retry if that acknowledgement isn't received.
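The calling side with Basic Auth and an ack/retry loop could look like this (the URL, credentials, and the "OK" ack convention are all assumptions; the curl call is stubbed so the sketch runs offline):

```shell
#!/bin/sh
# Caller on server1: trigger server2 and retry until it acks "OK".
URL=http://server2/trigger_download.cgi   # hypothetical trigger URL
CRED=user:secret                          # htaccess Basic Auth credentials

call_trigger() {
    # Real version:  curl -s --fail --user "$CRED" "$URL"
    # Stubbed here to return a successful ack.
    echo "OK"
}

tries=0
until [ "$(call_trigger)" = "OK" ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge 5 ]; then
        echo "giving up after $tries retries" >&2
        exit 1
    fi
    sleep 1    # back off before retrying
done

# Record the last successful, fully-acknowledged trigger.
date > /tmp/last_successful_trigger
```

The important part is that success is defined by the ack body, not by the HTTP request merely going through.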
Final note: If you're using SOAP without being absolutely forced to, you should read the first two paragraphs of SOAP::Simple's description on CPAN.
In reply to Re: Buffering and data tranmission in perl
by Sewi
in thread Buffering and data tranmission in perl
by py_201