apprentice has asked for the wisdom of the Perl Monks concerning the following question:

Please forgive my asking this question here. I am not a Linux sysadmin and don't know a good place to post this question, but I would happily take suggestions!

I'm in QA, testing our new "box". I have a Perl script that attempts to find out how many simultaneous TCP sessions can be run through the box (currently I can use either Telnet or FTP for the test). The script hits a ceiling at 268 FTP sessions. The limit seems to be on the server, because any command on the server (such as ls) yields errors to the effect of "too many files open on the system" until I kill one or two of the FTP sessions from the client side.

Question 1: The problem seems not to be a limit on the number of FTP sessions allowed, but rather some system resource limitation (it appears to apply to the whole system, not to an individual process), and the limit seems pretty low (around 328 processes running, but I can't say how many file descriptors or inodes--anyone know how to find that out?). Does anyone know how to find the limitation, or can you point me to the right place to ask?
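For the record, here is what I can inspect so far. On a 2.4-series kernel (the RedHat 7.2 era) the kernel exposes these counts under /proc; the paths below are as documented for that kernel family, so treat them as an assumption on anything else:

```shell
#!/bin/sh
# System-wide file handle usage: "allocated  free  maximum"
cat /proc/sys/fs/file-nr

# Inode status (first field: allocated inodes)
cat /proc/sys/fs/inode-nr

# Hard ceiling on file handles, system-wide
cat /proc/sys/fs/file-max

# Per-process file descriptor limit for the current shell
ulimit -n
```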

Question 2: Once I get past problem 1, I may hit some limitation in the server software, so I'm considering writing my own echo server. However, if I do that, basing it on the examples in Network Programming with Perl (ch. 10), do all the handles open on sockets count as open files, etc.? I.e., won't I just hit the same sort of system resource limitation (perhaps at some higher number)? Other "boxes" similar to the one we are building list the number of simultaneous TCP connections through them in the 4000-8000 range, so I need to test at least that high.

Thanks in advance for any helpful hints, and, please, no flames, I burn easily! :) Apprentice

Replies are listed 'Best First'.
Re: System resource limitation question
by robin (Chaplain) on Jan 08, 2002 at 00:03 UTC
    If you want to see how many file handles are in use system-wide, cat /proc/sys/fs/file-nr. The three numbers are the allocated handles, the allocated-but-unused (free) handles, and the system-wide limit, so the count actually in use is the first number minus the second. To increase the limit, you can write a number into /proc/sys/fs/file-max, like this:
    echo 8192 > /proc/sys/fs/file-max
    If you want the change to be persistent, add that command to /etc/rc.d/rc.local so it'll be run at startup.
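    A quick sketch that pulls the file-nr numbers apart (the field meanings are as documented for 2.4-series kernels, so take the parsing as an assumption elsewhere):

```shell
#!/bin/sh
# /proc/sys/fs/file-nr: allocated handles, free handles, system-wide max
read alloc free max < /proc/sys/fs/file-nr
inuse=$((alloc - free))
echo "file handles in use: $inuse of $max (allocated: $alloc)"
```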

    And yes, open sockets count towards the total. It's the number of file handles that are limited, rather than the number of actual files that can be opened at once.
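    And if you want to see how many of those handles one particular process is holding, counting the entries under /proc/&lt;pid&gt;/fd works on any proc-equipped Linux; here $$ (the shell's own PID) stands in for your FTP server's PID, which you'd substitute yourself:

```shell
#!/bin/sh
# Count the open descriptors of one process by listing its fd directory.
pid=$$
count=$(ls /proc/$pid/fd | wc -l)
echo "process $pid has $count descriptors open"

# The per-process ceiling, for comparison
ulimit -n
```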

Re: System resource limitation question
by kschwab (Vicar) on Jan 07, 2002 at 22:25 UTC
    There are per-user and per-system limits all over the place in Linux.

    This system tuning page covers all the bases pretty well, and this page is a quicker pointer to just the open file descriptor limits, which you appear to be hitting.

Re: System resource limitation question
by apprentice (Scribe) on Jan 07, 2002 at 22:15 UTC
    Oops! I forgot to say that both server and client are running RedHat 7.2.

    Aside: How do I "modify this node" rather than just post a reply to it?