in reply to secure and scalable client server file transfers

curl to transmit a file to a listening Windows service on the same network - that service then performs a number of actions on the file and returns the modified file.

As far as I am aware, curl can either transmit or receive a file using various protocols. So, the client script calls curl to upload the file to the server using some protocol (which?), and then the client script does what? Ends?

And then the server does its thing to the file... and then?

In essence, you need to be a lot clearer about how the existing system works, what limitations of it you wish to address, and what your requirements, priorities and goals are for the new system, before anyone can even begin to suggest alternatives.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^2: secure and scalable client server file transfers
by derekw (Initiate) on May 26, 2011 at 10:16 UTC

    Apologies for not being clearer. The scale issue lies not with curl, but with the non-Perl server. It was not written to cope with the number of file modification requests it now receives.

    The client script transmits the file using curl to the waiting Python Twisted web server. The server receives an HTTP POST, and the client script waits whilst the server works on the file - then the server sends the modified file back to the client.
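    Roughly, the client side amounts to something like the following (a sketch only -- the file names and endpoint URL are placeholders, not the real ones, and whether the server expects a multipart form field or a raw body via --data-binary depends on how it was written):

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $infile  = 'report.dat';                     # file to be modified (placeholder name)
        my $outfile = 'report.modified.dat';            # where the server's reply is saved
        my $url     = 'http://fileserver:8080/modify';  # placeholder endpoint

        # -f: fail on HTTP errors, -s: silent,
        # -F: multipart/form-data upload of $infile,
        # -o: write the (modified) response body to $outfile.
        my @cmd = ( 'curl', '-f', '-s',
                    '-F', "file=\@$infile",
                    '-o', $outfile,
                    $url );

        system(@cmd) == 0
            or die 'curl failed with exit status ', $? >> 8, "\n";

        print "Modified file written to $outfile\n";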

      It was not written to cope with the number of file modification requests it now receives.

      Then you need to identify where the limitation lies.

      • Is it that the web-server can only service a limited number of concurrent connections?
      • Or that the server hardware cannot cope with processing the number of concurrent requests?

        If this is the case, then there are three possible reasons:

        1. The web server has a (programmed) limit on the number of concurrent connections it will allow.

          Use a better web server.

        2. The server hardware maxes out all its CPUs/cores and still cannot keep up with demand.

          Purchase bigger hardware. Or employ/purchase a second (or more) box(es) and have the web server hand off (distribute) the CPU-intensive processing across the boxes.

        3. The web-server is unable to utilise all the CPUs/cores the hardware has available.

          Use a better web server.

          Or, run multiple copies of the existing one on different ports and have the front-end (running on the current port) redirect the incoming connections to the other ports for service (see the sketch below).
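          As a very rough illustration of that last option, a do-nothing front-end that just bounces incoming requests to back-end copies on other ports could look something like this (a sketch using HTTP::Daemon; the host name and port numbers are made up, and a real front-end would want proper error handling):

              #!/usr/bin/perl
              use strict;
              use warnings;
              use HTTP::Daemon;
              use HTTP::Status qw(HTTP_TEMPORARY_REDIRECT);

              # Ports on which the extra copies of the existing web server listen.
              my @backend_ports = ( 8081, 8082, 8083 );
              my $host          = 'fileserver.example';   # placeholder host name
              my $next          = 0;

              # The front-end itself listens on the current, public port.
              my $d = HTTP::Daemon->new( LocalPort => 8080, ReuseAddr => 1 )
                  or die "Cannot listen on port 8080: $!";

              while ( my $conn = $d->accept ) {
                  while ( my $req = $conn->get_request ) {
                      # Round-robin the incoming requests across the back-end ports.
                      my $port = $backend_ports[ $next++ % @backend_ports ];
                      $conn->send_redirect(
                          "http://$host:$port" . $req->uri->path_query,
                          HTTP_TEMPORARY_REDIRECT,
                      );
                  }
                  $conn->close;
              }

          A 307 redirect is used so that a client that follows it (e.g. curl -L) re-sends the POST body to the back-end rather than degrading to a GET.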

      You are probably better off using an existing web server than trying to write your own in Perl. It doesn't have to be a behemoth like Apache; something simple and efficient like Thttpd is probably a better choice for something like this.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

      Does the server only handle one request at a time? In that case, a rewrite will help.

      Or does the server already work on many requests in parallel? In that case, the scaling issue is probably the hardware of your server: memory size, CPU, network or hard disk speed. You won't get any speedup without throwing hardware at the problem.