Personally, rather than a central application parcelling data up, dispatching it to one of a list of known workers, polling the workers for their results, retrieving them, and then dispatching the next batch, I'd turn things around. Use the web server model, and a webserver too.
Workers hit one page to get data, and another page to return results. The central controller can be any webserver you like, from Apache2 to HTTP::Daemon.
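
For illustration, a bare-bones controller along those lines might look something like this. Untested sketch only: the port, the /work and /result paths and the in-memory @queue are placeholders, and it serves a bare data packet rather than the full page-plus-code-plus-form described below.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Status;
    use HTTP::Response;

    my @queue = ( 'data packet 1', 'data packet 2' );   # work waiting to go out
    my @done;                                           # results returned so far

    my $d = HTTP::Daemon->new( LocalPort => 8080 )
        or die "Can't listen: $!";
    print "Controller at: ", $d->url, "\n";

    while ( my $c = $d->accept ) {
        while ( my $r = $c->get_request ) {
            if ( $r->method eq 'GET' and $r->uri->path eq '/work' ) {
                # Hand the next packet to whichever worker asked for it
                my $packet = @queue ? shift @queue : '';
                $c->send_response(
                    HTTP::Response->new(
                        RC_OK, 'OK',
                        [ 'Content-Type' => 'text/plain' ],
                        $packet,
                    )
                );
            }
            elsif ( $r->method eq 'POST' and $r->uri->path eq '/result' ) {
                # Collect whatever the worker posted back via the form
                push @done, $r->content;
                $c->send_response( HTTP::Response->new( RC_OK ) );
            }
            else {
                $c->send_error( RC_NOT_FOUND );
            }
        }
        $c->close;
        undef $c;
    }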
The workers could use LWP to fetch the work to do. The page they fetch would contain the data to process, even an embedded code section to use to process it, plus a form on which to return the results. That easily allows for multiple uses and clean segregation of the returned data, as the code comes from the page fetched and the processed data is dispatched back to the address referenced by the form.
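
The matching worker is little more than an LWP loop. Again untested; the controller.example host name and the uc() "processing" are stand-ins, and a real worker would run the code section shipped with the page instead.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $controller = 'http://controller.example:8080';   # made-up host name
    my $ua         = LWP::UserAgent->new;

    while ( 1 ) {
        my $work = $ua->get( "$controller/work" );
        die $work->status_line unless $work->is_success;

        my $data = $work->decoded_content;
        last unless defined $data and length $data;   # empty page: nothing left to do

        my $result = uc $data;   # stand-in for the real processing

        # Return the result as form data; POST keeps large results off the URL
        my $reply = $ua->post( "$controller/result", [ result => $result ] );
        warn $reply->status_line unless $reply->is_success;
    }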
Testing can be done by accessing the server with a standard web browser and pasting "processed data" into the form by hand. If the application needs to deal with large volumes of return data, use POST instead of GET.
The security of the application can use the standard webserver authentication mechanisms. Assuming the server is properly controlled and vetted (I'm assuming a LAN/intranet environment), the code segments should be as secure as the server is. You can even use https: to provide encryption if required.
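
On the worker side that costs almost nothing: basic auth over https is just a couple of extra lines. Sketch only; the host, realm and account names are made up, and the realm string has to match whatever the webserver's auth config announces.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;

    # Made-up host, realm and account
    $ua->credentials( 'controller.example:443', 'Grid Workers',
                      'worker1' => 'secret' );

    # Only the scheme changes for https; LWP needs an SSL backend
    # (LWP::Protocol::https / IO::Socket::SSL) installed to speak it
    my $resp = $ua->get( 'https://controller.example/work' );
    if ( $resp->is_success ) {
        print $resp->decoded_content, "\n";
    }
    else {
        warn $resp->status_line, "\n";
    }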