derekw has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I have inherited a system (not in Perl) that uses a client script (Windows or *nix) and curl to transmit a file to a listening Windows service on the same network; that service then performs a number of actions on the file and returns the modified file.

Multiple clients can and do use this script - on different machines but always accessing the same server.

I would like to rewrite it in Perl, as although the current version is working, it is not easily maintainable or scalable.

I humbly seek advice on how the Monks would transfer the file so that it is secure and the process is scalable. There are so many excellent modules that could do this, it seems to me, that I'm not sure where to start!

Replies are listed 'Best First'.
Re: secure and scalable client server file transfers
by BrowserUk (Patriarch) on May 26, 2011 at 09:48 UTC
    curl to transmit a file to a listening windows service on the same network - that service then performs a number of actions on the file and returns the modified file.

    As far as I am aware, curl can either transmit or receive a file using various protocols. So, the client script calls curl to upload the file to the server using some protocol (which?), and then the client script does what? Ends?

    And then the server does its thing to the files,...And then?

    • Does the client wait for the modified file to appear (where?) and then fetch it?
    • Does the server send the modified file back?

      How? To where? Using what protocol?

    • Other?

    In essence, you need to be a lot clearer about how the existing system works; what its limitations are that you wish to address; what your requirements, priorities and goals are for the new system; before anyone could even begin to suggest alternatives.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Apologies for not being clearer. The scale issue lies not with curl, but with the non-Perl server. It was not written to cope with the number of file modification requests it now receives.

      The client script transmits the file using curl to the waiting Python Twisted web server. The server gets an HTTP POST and the client script waits whilst the server works on the file; then the server sends the modified file back to the client.
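
      For reference, that client side could be sketched in Perl with LWP::UserAgent and HTTP::Request::Common. The URL, form-field name and file names below are hypothetical placeholders, and a real client would want its own timeout and retry policy:

      ```perl
      use strict;
      use warnings;
      use LWP::UserAgent;
      use HTTP::Request::Common qw(POST);

      # Hypothetical endpoint; substitute the real host, port and path.
      my $url = 'http://server.example:8080/process';

      my $ua  = LWP::UserAgent->new(timeout => 300);
      my $res = $ua->request(POST $url,
          Content_Type => 'form-data',
          Content      => [ file => ['input.dat'] ],   # multipart upload from disk
      );
      die 'Upload failed: ', $res->status_line unless $res->is_success;

      # The server's response body is the modified file; write it out verbatim.
      open my $out, '>', 'output.dat' or die "output.dat: $!";
      binmode $out;
      print {$out} $res->content;
      close $out;
      ```

      Switching $url to an https:// address (with LWP::Protocol::https installed) would also address the "secure" half of the original question.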

        It was not written to cope with the number of file modification requests it now receives.

        Then you need to identify where the limitation lies.

        • Is it that the web-server can only service a limited number of concurrent connections?
        • Or that the server hardware cannot cope with processing the number of concurrent requests?

          If this is the case, then there are three possible reasons:

          1. The web server has a (programmed) limit on the number of concurrent connections it will allow.

            Use a better web server.

          2. The server hardware maxes out all its cpus/cores and still cannot keep up with demand.

            Purchase bigger hardware. Or employ/purchase a second (or more) box(es) and have the web server hand off (distribute) the cpu-intensive processing across the boxes.

          3. The web-server is unable to utilise all the cpus/cores the hardware has available.

            Use a better web server.

            Or, run multiple copies of the existing one on different ports and have the front-end (running on the current port) redirect the incoming connects to the other ports for service.
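
        That front-end-plus-ports idea could be sketched in Perl with HTTP::Daemon. The port numbers here are hypothetical, and a real deployment would need error handling and a smarter dispatch policy; a 307 redirect tells the client to repeat its POST against the chosen back-end (curl must be invoked with -L to follow it):

        ```perl
        use strict;
        use warnings;
        use HTTP::Daemon;

        # Hypothetical back-end ports: one worker web server listens on each.
        my @backends = (8081, 8082, 8083);
        my $next     = 0;

        my $front = HTTP::Daemon->new(LocalPort => 8080, ReuseAddr => 1)
            or die "Cannot listen on port 8080: $!";

        while (my $conn = $front->accept) {
            while (my $req = $conn->get_request) {
                # Round-robin across the workers; 307 preserves the POST method.
                my $port = $backends[$next++ % @backends];
                $conn->send_redirect("http://localhost:$port" . $req->uri->path, 307);
            }
            $conn->close;
        }
        ```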

        You are probably better off using an existing web server than trying to write your own in Perl. It doesn't have to be a behemoth like Apache; something simple and efficient like thttpd is probably a better choice for a task like this.



        Does the server only handle one request at a time? In that case a rewrite will help.

        Or does the server already work on many requests in parallel? In that case the scaling issue is probably the hardware of your server, memory size, CPU, network or hard disk speed. You won't get any speedup without throwing hardware at the problem.
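
        One quick way to answer that question from the client side, on a *nix box, is to fire several uploads in parallel and compare the wall-clock time against a single request. The URL, file name and request count below are placeholders:

        ```perl
        use strict;
        use warnings;
        use Time::HiRes qw(time);
        use LWP::UserAgent;
        use HTTP::Request::Common qw(POST);

        my $url = 'http://server.example:8080/process';   # hypothetical endpoint
        my $n   = 4;                                      # parallel requests

        my $t0 = time;
        my @pids;
        for (1 .. $n) {
            my $pid = fork;
            die "fork: $!" unless defined $pid;
            if ($pid == 0) {                              # child: one upload
                my $res = LWP::UserAgent->new->request(POST $url,
                    Content_Type => 'form-data',
                    Content      => [ file => ['input.dat'] ],
                );
                exit($res->is_success ? 0 : 1);
            }
            push @pids, $pid;
        }
        waitpid $_, 0 for @pids;
        printf "%d parallel uploads took %.1fs\n", $n, time - $t0;
        # If that is roughly N times the single-request time,
        # the server is serialising its requests.
        ```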

Re: secure and scalable client server file transfers
by sundialsvc4 (Abbot) on May 26, 2011 at 11:19 UTC

    If “you don’t know where to start,” then that is where you must start.

    The first thing you'll need to make here is a defensible business case, and a defensible and implementable business plan. This plan must consider all currently-available options, including commercial offerings, and also including leaving the system right where, and as, it is.

    There's no law against a system that smells bad, if it works and if it still has commercial life left in it. That "commercial life span" is surprisingly short in some cases, but perhaps longer than you wish. In any case, if the business case exists for fundamentally changing the system that you have inherited, your approach is going to have to be an incremental one. It might involve Perl. It might not.

    I am familiar enough with Python, and with Twisted, to say with considerable confidence that the life of such a system ought to be nowhere near "over," and, even though you will never look at the Tab key in the same way again, it is to me just about as impressive a tool as Perl is. And there is a lot of very solid stuff out there ... Django, anyone? ... which is pumping a lot of iron. If you tell me that such a system "has to be" decomm'd, I will not be persuaded easily. Let's say for the sake of argument that "that door is, at least for now, Officially Closed.™ You can't rewrite the system." What else might we consider doing?

    Perhaps what you need is better workflow processing: farming out the work to a batch system that could be written in any number of languages (and which will probably consist of an off-the-shelf existing tool).

Re: secure and scalable client server file transfers
by Anonymous Monk on May 26, 2011 at 09:23 UTC
    it is not easily maintainable or scalable.

    What makes it hard to maintain? FWIW, it probably *is* scalable; curl is fast.