in reply to Multithreaded process on AIX slow

scunacc,

What version of AIX?

Background: If it's AIX 6.1 or later, you may be working against the AIX dispatcher. Unix and Linux treat cores as CPUs, while AIX knows that the first core of a CPU is the fastest and the last core is the slowest. As an example, say you have 8 CPUs with 6 cores each: the AIX dispatcher will always want the first core of each CPU working the most, then the next level, and so on.

You may want to search on this, since I seem to remember that on AIX you may get better performance by limiting the number of active threads so that they execute on the faster cores. Also, I believe I/O-bound threads are dispatched on the slower cores and CPU-intensive threads on the faster cores.
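
If limiting active threads is worth trying, a minimal Perl sketch of capping concurrency with a semaphore might look like this - the $MAX_ACTIVE value and the worker body are placeholders to tune, nothing specific to your program:

    use strict;
    use warnings;
    use threads;
    use Thread::Semaphore;

    # Hypothetical cap on concurrently active workers -- tune and measure.
    my $MAX_ACTIVE = 4;
    my $slots = Thread::Semaphore->new($MAX_ACTIVE);

    sub worker {
        my ($id) = @_;
        $slots->down;    # block until one of the $MAX_ACTIVE slots is free
        # ... CPU-intensive work for job $id would go here ...
        $slots->up;      # release the slot for the next waiting thread
    }

    my @threads = map { threads->create(\&worker, $_) } 1 .. 20;
    $_->join for @threads;

You still start as many threads as you like, but only a few are runnable at any moment, which is the effect I'm describing above.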

I believe the 'xlc' compiler is designed to help C/C++/Fortran programs, but Perl is on its own. And if your Perl was built with 'gcc', you may be in an even worse situation.

I don't know if this will help, but maybe the information will give you a different approach to the problem.

Regards...Ed

"Well done is better than well said." - Benjamin Franklin

Re^2: Multithreaded process on AIX slow
by scunacc (Acolyte) on Nov 13, 2013 at 21:48 UTC

    Hi Ed,

    Appreciate the observations. Interesting… How does the dispatcher handle assignments under micro-partitioning, then, since the hardware is virtualized further? I have no control over how that is allotted on this system.

    Also - the way the application is designed, I have multiple threads for input and multiple for output. Each either feeds a Q (input) from a client or reads a Q (output) and communicates back to waiting clients. The clients send info to the server, then sit waiting (as a reverse "server", if you will) for results. The Q's enable the processing threads (of which there are a considerable number, in a hierarchy / intercommunicating community performing different related functions) to consume what they want in parallel. When done, they asynchronously dump the results into the output Q's, which are shared with the output threads. There is no ongoing connection to clients: that is also asynchronous, with the client connection information carried from input to output Q as part of the SOAP object. The output Q handling threads then contact the client back (acting as a client to the client acting as a server) saying: "Here's the answer".
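
    In outline, the shape is roughly this - just a sketch with placeholder process() and reply_to_client() subs and made-up thread counts, not the real code:

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;

        # Stand-ins for the real processing and the callback to the waiting client.
        sub process         { my ($job) = @_; return "result for $job" }
        sub reply_to_client { my ($res) = @_; print "$res\n" }

        my $in_q  = Thread::Queue->new;   # fed by the input threads
        my $out_q = Thread::Queue->new;   # drained by the output threads

        # Processing community: consume jobs in parallel, dump results asynchronously.
        my @workers = map {
            threads->create(sub {
                while (defined(my $job = $in_q->dequeue)) {
                    $out_q->enqueue(process($job));
                }
            });
        } 1 .. 4;

        # Output side: contact the waiting client back with the answer.
        my $sender = threads->create(sub {
            while (defined(my $res = $out_q->dequeue)) {
                reply_to_client($res);
            }
        });

        $in_q->enqueue($_) for 1 .. 10;       # as if the input threads had queued work
        $in_q->enqueue(undef) for @workers;   # one shutdown marker per worker
        $_->join for @workers;
        $out_q->enqueue(undef);               # then shut the sender down
        $sender->join;

    The real thing carries the client connection information inside the queued SOAP object rather than holding a connection open, as described above.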

    That isn't the I/O that's binding me here, though: that processing has worked fine alongside some other in-machine operations at breakneck speed, maxing things out nicely when required ;-), since 2008.

    The problem seems to be the multiple net connections using REST, as I mentioned. I can still slam things as fast as they will go with hundreds of clients in sequence if I *batch* my REST data - and, as I say, it then completes in nearly the same time as the Xeon-based version. It's still doing the exact same amount of *other* I/O, though. I still start the same number of threads. I still have the same number of clients sending the same amount of data. It's just how much data I send in each REST request - and, I guess, how many consecutive REST requests I make as a result (1 vs. 50).

    So, I/O *per se* isn't binding me. I think what I'm wondering is whether the REST::Client or HTTP::Request modules have any known issues on AIX.
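
    Something like the following is how I'd isolate the batching effect on its own, outside the server - purely a sketch against a hypothetical endpoint, with a made-up JSON payload standing in for the real data:

        use strict;
        use warnings;
        use REST::Client;
        use JSON;
        use Time::HiRes qw(time);

        # Hypothetical endpoint and records; substitute the real REST target and payload.
        my $url     = 'http://example.com/service';
        my @records = map { { id => $_, value => "data $_" } } 1 .. 50;

        my $client = REST::Client->new;
        $client->addHeader('Content-Type', 'application/json');

        # Variant A: one request per record (50 round trips).
        my $t0 = time;
        $client->POST($url, encode_json($_)) for @records;
        printf "per-record: %.3fs\n", time - $t0;

        # Variant B: one batched request carrying all 50 records.
        my $t1 = time;
        $client->POST($url, encode_json(\@records));
        printf "batched:    %.3fs\n", time - $t1;

    If the per-record variant is what blows up on AIX but not on the Xeon box, that would point at per-request connection setup (REST::Client rides on LWP::UserAgent underneath) rather than at the amount of data moved.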

    This is AIX 5.3 - can't upgrade this machine. I have a 6.1 machine available that I will have to build an identical Perl on to test with though.

    Hmmm. Let's see (…logging in sounds…) I built this particular Perl instance with gcc. :-/ Ah - there was a reason for that: I also had to build PostgreSQL on this particular machine and have it link dynamically. That was the only way to get the two to play nicely with each other.

    Kind regards

    Derek