x12345 has asked for the wisdom of the Perl Monks concerning the following question:
Hello perl experts,
I would like to learn to do the following things with perl:
--open parallel processes on a lot of remote machines,
--get back the results,
--and kill any process that cannot succeed.
The scenario is:
--There are 1000 Linux machines, and on each machine there is already a shell script used to check the machine's memory, disk, etc.
--A perl script on a server uses 'open pipe': it opens 1000 filehandles, one to each machine, runs the shell script there and brings back the results. The shell script normally needs just 5-15s to finish, so I can get the results back soon. But sometimes the shell script gets stuck because of a problem on the machine (e.g. with a disk problem, the df command can hang for ages); in that case, I should close the filehandle after waiting for some time, e.g. 300s.
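The pipe-per-machine setup described above can be sketched as follows. This is a minimal, runnable illustration, not the poster's actual script: the host names and remote script path are placeholders, and a local `echo` stands in for the real `ssh` call so the sketch runs anywhere.

```perl
use strict;
use warnings;

# Hypothetical host list and remote script path -- stand-ins for the
# poster's 1000 machines and health-check script.
my @hosts         = ('hostA', 'hostB');
my $remote_script = '/usr/local/bin/check_health.sh';

my %fh_for;    # host => pipe filehandle
for my $host (@hosts) {
    # The real call would be something like:
    #   open my $fh, '-|', 'ssh', $host, $remote_script or die ...;
    # Here a local echo fakes the remote output so the sketch is runnable.
    open my $fh, '-|', 'echo', "$host: OK" or die "pipe to $host failed: $!";
    $fh_for{$host} = $fh;
}

# Naive collection: reads each pipe to EOF, one after another.
# (This blocks on a hung machine -- the timeout problem is question 2.)
my %result_for;
for my $host (@hosts) {
    my $fh = $fh_for{$host};
    $result_for{$host} = do { local $/; <$fh> };    # slurp whole output
    close $fh;
}
print $result_for{$_} for sort keys %result_for;
```

Note that all the pipes are opened first, so the remote commands run in parallel; only the collection loop at the end is sequential, which is exactly why a single stuck machine is a problem.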
My two main questions are:
1. The script I have sets all the filehandles to non-blocking and uses 'sysread' to read the output. Why use non-blocking? What about using while (<FH>) {push @results,$_;} instead? What is the difference?
2. How do I do the timeout for the filehandles?
Thanks in advance!
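On both questions: while (<FH>) blocks until a full line (or EOF) arrives on that one handle, so a single hung machine stalls the whole reader; non-blocking handles watched with select (e.g. via IO::Select) let one process service all 1000 pipes at once, and sysread returns whatever bytes are available without waiting for a newline. The timeout argument to can_read then gives you the deadline. A runnable sketch of this pattern (not the poster's script) -- two local perl one-liners, one fast and one deliberately hung, stand in for the remote shell scripts, and the deadline is shortened from 300s to 2s:

```perl
use strict;
use warnings;
use IO::Select;

my $deadline = 2;    # seconds; the poster would use ~300

my (%name_of, %pid_of, %buf);
my $sel = IO::Select->new;

# Two stand-in "remote" jobs run via the current perl ($^X):
for my $job (['fast', 'print "fast done\n"'],
             ['slow', 'sleep 100; print "never\n"']) {
    my ($name, $code) = @$job;
    my $pid = open my $fh, '-|', $^X, '-e', $code or die "pipe failed: $!";
    $name_of{ fileno $fh } = $name;
    $pid_of{ fileno $fh }  = $pid;    # needed later to kill stuck children
    $sel->add($fh);
}

my $stop = time() + $deadline;
while ($sel->count) {
    my $left = $stop - time();
    last if $left <= 0;
    # can_read blocks for at most $left seconds -- this is the timeout.
    for my $fh ($sel->can_read($left)) {
        # select said this handle is readable, so sysread returns
        # whatever bytes are there without waiting for a full line.
        my $n = sysread $fh, my $chunk, 4096;
        if ($n) { $buf{ $name_of{ fileno $fh } } .= $chunk }
        else    { $sel->remove($fh); close $fh }    # EOF: child finished
    }
}

# Whatever is still registered blew the deadline: kill it, then close
# (close on a pipe waits for the child, so kill first or it would hang).
for my $fh ($sel->handles) {
    kill 'TERM', $pid_of{ fileno $fh };
    $sel->remove($fh);
    close $fh;
}

print exists $buf{fast} ? "fast: $buf{fast}" : "fast: no answer\n";
print exists $buf{slow} ? "slow: $buf{slow}" : "slow: timed out\n";
```

Here the fast job's output is collected within the deadline while the hung job is killed and discarded; in the real setup the killed pid would be the ssh child for the stuck machine.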
Replies are listed 'Best First'.
Re: parallel process on remote machines, read results and handle timeout of those processes
by BrowserUk (Patriarch) on Oct 31, 2014 at 02:01 UTC
by x12345 (Novice) on Oct 31, 2014 at 10:50 UTC
by BrowserUk (Patriarch) on Oct 31, 2014 at 12:55 UTC
by x12345 (Novice) on Oct 31, 2014 at 15:36 UTC
by BrowserUk (Patriarch) on Oct 31, 2014 at 15:56 UTC

Re: parallel process on remote machines, read results and handle timeout of those processes
by Anonymous Monk on Oct 31, 2014 at 00:27 UTC