TienLung has asked for the wisdom of the Perl Monks concerning the following question:

I have searched the internet and found some answers to my question, but none in English I can understand. I have a Perl script that is run on a server, and different users logged in at the same time each require the script to be run. Now that I have a few users affiliated with the site, I see that it is taking a long time to successfully complete each request to run the script. The base time to run the script is 4 seconds with one user, and the completion time grows such that each additional request adds another 4 seconds, so it seems the server is queueing up the requests to run the script. Is this correct and normal behavior? Or should a server be able to handle multiple requests and spawn new threads/processes? Apologies if this is common knowledge, but I am rather new to server-side coding. Many thanks.

Replies are listed 'Best First'.
Re: Perl concurrent access
by moritz (Cardinal) on Sep 10, 2009 at 12:52 UTC
    Most web servers start normal CGI scripts in parallel - if you have two requests roughly at the same time, the scripts that answer these requests also run at the same time.

    That means each script gets only half of the available CPU time, and thus takes twice as long to finish.
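    A quick way to check this for yourself, as a minimal sketch (the 5-second sleep just stands in for your real work): drop a throwaway CGI script like this next to yours and request it from two browser windows at once. Overlapping time windows (and distinct PIDs) mean the server really is running the requests in parallel.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Report this process's PID and start/end times so two
        # simultaneous requests reveal whether they overlapped.
        print "Content-type: text/plain\n\n";
        print "PID $$ started at ",  time(), "\n";
        sleep 5;    # stand-in for the real work
        print "PID $$ finished at ", time(), "\n";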

    Perl 6 - links to (nearly) everything that is Perl 6.
Re: Perl concurrent access
by cdarke (Prior) on Sep 10, 2009 at 13:19 UTC
    I have a perl script
    What kind of thing does it do? For example, if it is a heavy SQL transaction then it might lock the database, or have to wait for other locks.

    that is being run on a server
    Do you mean a web server, like Apache? How are the script runs being submitted?

    it seems the server is queueing up the requests to run the script
    You should be able to see what is running with ps (UNIX) or tasklist (Windows). Is the "server" really queueing them up, or trying to run them concurrently?

      In reply to your first comment, yes, it does involve an SQL transaction. It connects to the database, prepares a query, executes it, fetches rows in turn in a while loop, and then at the end disconnects and finishes. You mention a lock; I was not aware of any form of lock (as I said, I am a complete novice). So could it be that, because the database connection and query are closed only at the end of the file, the next process needs to wait until the previous query is completed? With that knowledge, would it speed things up to do the query at the start of the script, load the response into an array, close the query and connection, and then loop through the array?
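      Something like this, perhaps? (Just a sketch; the DSN, query, and column names are placeholders for what the real script does.)

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                                 { RaiseError => 1 });

          # Fetch every row up front in one call ...
          my $rows = $dbh->selectall_arrayref(
              'SELECT name, amount FROM orders WHERE user_id = ?',
              undef, 42);    # 42: placeholder bind value

          # ... so the connection can be released immediately.
          $dbh->disconnect;

          # Do the slow work from the in-memory copy.
          for my $row (@$rows) {
              my ($name, $amount) = @$row;
              # ... build the output here ...
          }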
        It would help a bit if you posted the script you are using, or at least an example of what it does, as that makes things easier to understand for all of us.

        The next thing you should keep in mind is that it always pays off to hold finite resources for the shortest time possible. Meaning: close a DB connection as soon as you no longer need it. The same goes for any and all resources, like files, printers, and pretty much anything that you cannot, for whatever reason, share with a second instance of the script. So check for file locks or other possible reasons why your script would not run more than once.

        Also, if you expect to have, say, 100 users executing this script at roughly the same time or in a relatively short time frame, see if you can change the logic to add a caching option for the DB results, as it is quite likely that out of the 100 users some will end up running the same query. Depending on how fast the data in the DB is likely to change and how up to date the results need to be, you might be able to serve cached results for a certain percentage of requests, lowering the load on the database and the risk of tying up your limited resources.
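        Here is roughly what I mean, as a sketch only (the cache directory, 60-second lifetime, and Storable-on-disk approach are illustrative choices, not a tested implementation):

            use strict;
            use warnings;
            use Storable qw(store retrieve);
            use Digest::MD5 qw(md5_hex);

            my $CACHE_DIR = '/tmp/query_cache';    # placeholder path
            my $MAX_AGE   = 60;    # seconds a cached result stays fresh
            mkdir $CACHE_DIR unless -d $CACHE_DIR;

            sub cached_query {
                my ($dbh, $sql, @binds) = @_;
                my $key  = md5_hex(join "\0", $sql, @binds);
                my $file = "$CACHE_DIR/$key";

                # Fresh enough? Reuse the stored rows, skip the database.
                if (-e $file && time() - (stat $file)[9] < $MAX_AGE) {
                    return retrieve($file);
                }

                my $rows = $dbh->selectall_arrayref($sql, undef, @binds);
                store($rows, $file);
                return $rows;
            }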
        Not necessarily. If it is just a query (select), rather than an update, delete, or insert, then locks might not be involved. I meant the term "transaction" to be technical, that is, possibly involving a lock. So it depends on the type of transaction, and the database. Some databases require table locking and some row locking. Some databases have different requirements depending on the table type (MySQL comes to mind).

        This still assumes that the jobs are running concurrently. Are they?
        It might also help if we knew the OS and which database product.
Re: Perl concurrent access
by TienLung (Acolyte) on Sep 10, 2009 at 14:43 UTC
    A bit more explanation... The script pulls a number of rows from the database and loops through this data in a while loop. Within this while loop, using the data, it adds information to a PDF page created using PDF::API2; at the end it closes the database connection and saves the PDF. The time-consuming bit is actually creating the PDF. My thoughts at the moment, following on from suggestions here, are that the script is hogging the database connection, not allowing another instance of the script to access the database until the connection is closed by the first. Secondly, perhaps however PDF::API2 works requires some other resource that is being locked by the system, so each script has to wait. Or is the PDF creation simply so CPU-intensive that it has no choice but to take more time?
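    In outline the script does something like this (a simplified sketch; the query, font, and coordinates are placeholders for the real code):

        use strict;
        use warnings;
        use DBI;
        use PDF::API2;

        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                               { RaiseError => 1 });
        my $sth = $dbh->prepare('SELECT label, value FROM report_rows');
        $sth->execute;

        my $pdf  = PDF::API2->new;
        my $page = $pdf->page;
        my $font = $pdf->corefont('Helvetica');
        my $y    = 750;

        # The expensive part: one line of PDF text per database row.
        while (my ($label, $value) = $sth->fetchrow_array) {
            my $text = $page->text;
            $text->font($font, 12);
            $text->translate(50, $y);
            $text->text("$label: $value");
            $y -= 15;
        }

        $dbh->disconnect;                 # connection held until the end
        $pdf->saveas('/tmp/report.pdf');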
      My friend, working with PDF docs is time-consuming. There's no way around this. Get more hardware or make peace with it.

      Here, try this. As was suggested, you can check what's running on the machine with ps, so...

      Open a CLI and use the top command. This shows you all the PIDs on the machine. Now run your program and you'll see the processes running, and you'll see what the memory consumption is. You'll see your CPU at 3%, and then as your little script is called the CPU will jump to 80% and then 95%.
      If you have a dual core, it will jump to 49%, and then as a second script instance comes in, you'll see 99% or so again...

        Many thanks for this reply; that stops me wasting time optimising code, etc.! I may try to work out a 'ticketing' system, so when people want to create a PDF they can 'take a ticket' and wait in line for a slot. That way I can limit the number of PDF-creation scripts in memory at any one time to two or three. I can then throw some more power at it by upgrading the server from dual core to 8 or 16 cores and throwing more RAM at it... then hopefully, after my wallet recovers, it should be OK!
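        Something along these lines, maybe (just a sketch of the idea, using flock on a small pool of lock files; the slot count and paths are guesses):

            use strict;
            use warnings;
            use Fcntl qw(:flock);

            my $SLOTS    = 3;                  # max concurrent PDF jobs
            my $LOCK_DIR = '/tmp/pdf_slots';   # placeholder path
            mkdir $LOCK_DIR unless -d $LOCK_DIR;

            # 'Take a ticket': grab the first free slot's lock file;
            # if every slot is busy, wait in line and retry.
            sub take_ticket {
                while (1) {
                    for my $slot (1 .. $SLOTS) {
                        open my $fh, '>', "$LOCK_DIR/slot$slot.lock"
                            or die "Cannot open lock file: $!";
                        return $fh if flock $fh, LOCK_EX | LOCK_NB;
                        close $fh;
                    }
                    sleep 1;
                }
            }

            my $ticket = take_ticket();
            # ... do the expensive PDF creation here ...
            # The slot frees itself when $ticket goes out of scope.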
Re: Perl concurrent access
by TienLung (Acolyte) on Sep 10, 2009 at 13:15 UTC
    I see, so with many users, 100+, this could take a very, very long time! Are there ways round this? Or is this a question to ask my server provider?
      Are there ways round this?

      Sure. Profile and optimize your code, or throw more hardware at it.

      (For example, for SQL queries it helps a lot to have the appropriate indexes in place.)
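      A hypothetical one-off snippet for adding such an index through DBI (table and column names are made up):

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                                 { RaiseError => 1 });
          # Index the column the slow query filters on.
          $dbh->do('CREATE INDEX idx_orders_user ON orders (user_id)');
          $dbh->disconnect;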

      Perl 6 - links to (nearly) everything that is Perl 6.
Re: Perl concurrent access
by Anonymous Monk on Sep 10, 2009 at 12:51 UTC
    It depends on your web server program.