Re: Sequential processing with fork.
by stevieb (Canon) on Aug 04, 2015 at 19:33 UTC
Here's an example using Parallel::ForkManager that has come in very handy lately for these types of questions. It processes up to $max_forks clients at a time. In the for loop, I've put in the number of clients (33), so it runs 10 at a time, calling do_something() for each one until all 33 are exhausted.
#!/usr/bin/perl
use warnings;
use strict;

use Parallel::ForkManager;

my $max_forks = 10;
my $fork = Parallel::ForkManager->new($max_forks);

# on start callback
$fork->run_on_start(
    sub {
        my $pid = shift;
    }
);

# on finish callback
$fork->run_on_finish(
    sub {
        my ($pid, $exit, $ident, $signal, $core) = @_;
        if ($core){
            print "PID $pid core dumped.\n";
        }
    }
);

# forking code
for my $client (1..33){
    $fork->start and next;
    do_something($client);
    sleep(2);
    $fork->finish;
}

sub do_something {
    my $client = shift;
    print "$client\n";
}

$fork->wait_all_children;
-stevieb
UPDATE: I can't say for sure, but after removing the sleep statement, the output hints that it starts a new process as soon as a previous one finishes, so long as the number running never exceeds 10. I'm not 100% sure of this though.
UPDATE 2: According to the Parallel::ForkManager docs, it does indeed start another process as soon as one finishes. The number of free slots to wait for is configurable:

wait_for_available_procs( $n )

    Wait until $n available process slots are available. If $n is not
    given, defaults to 1.
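For example, a minimal sketch of using that call to hold back until a group of slots frees up (the batch size of 5 is an illustrative value, not from the docs):

# Hypothetical: block until at least 5 of the 10 slots are free
# before queueing the next batch of clients.
$fork->wait_for_available_procs(5);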
That's the most useless run_on_finish possible. Worse than none at all.
#!/usr/bin/perl
use warnings;
use strict;

use Parallel::ForkManager qw( );

use constant MAX_WORKERS => 10;

sub work {
    my ($client) = @_;
    print("$client start...\n");
    sleep(3 + int(rand(2)));
    print("$client done.\n");
}

{
    my $pm = Parallel::ForkManager->new(MAX_WORKERS);

    # Optional.
    $pm->run_on_finish(sub {
        my ($pid, $exit, $ident, $signal, $core) = @_;
        if ($signal) {
            print("Client $ident killed by signal $signal.\n");
        }
        elsif ($exit) {
            print("Client $ident exited with error $exit.\n");
        }
        else {
            print("Client $ident completed successfully.\n");
        }
    });

    for my $client (1..33){
        $pm->start($client) and next;
        work($client);
        $pm->finish();
    }

    $pm->wait_all_children();
}
Yes, that's exactly what I want...as soon as ONE process ends, start another one!!
I just can't ever have more than ten running simultaneously...
~~~~~~~~~ I'm unique, just like everybody else! ~~~~~~~~~
Stevieb,
Thank you! Your code pointed me in the right direction and does just what I needed (after trying several other examples). The only problem is that, for whatever reason, it doesn't progress to the next process after the initial group is started. As I mentioned earlier, I'm actually calling another file and passing it the client ID via @ARGV, so it's not simply a lexical sub that's being forked. I've tried using exec and system to no avail. After one of the original group of processes ends, it doesn't start another one to replace it or create a new one to run the next client. After all the initial processes finish, the main program just ends. :-( Any idea what could be going on?
Here's what I'm working with...
#!/usr/bin/perl
use warnings;
use strict;

use Uber qw/ client_list /;
use Parallel::ForkManager;

my $max_forks = 2; # Changed to 2 just to see if it would go to the next one...

my $clients = client_list();

my $fork = Parallel::ForkManager->new($max_forks);

for my $client ( @$clients ){
    $fork->start( $client->{id} ) and next;
    do_something( $client->{id} );
    $fork->finish;
}

sub do_something {
    my $client = shift;
    system( "perl", "C:/Path/To/Folder/process.pl", "$client" );
}

$fork->wait_all_children;
Re: Sequential processing with fork.
by SuicideJunkie (Vicar) on Aug 04, 2015 at 19:41 UTC
It looks like you've got clients forking more clients, and that goes out of control fast.
ISTM that what you want is:
- A master routine to:
  - Fork off ten clients
  - Add to the queue of tasks until done
  - Queue up ten special 'terminate' tasks
- A client routine (ten copies made by the master) to:
  - Shift a task out of the queue
  - Do the task
  - Terminate if the task is a special terminate task
  - Sleep if there are no tasks in the queue at the moment
Keeping your forked code sterile will help prevent you from accidentally making a fork bomb.
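A minimal sketch of that layout, assuming a pipe as the shared queue; the fixed record width, the 33 numbered tasks, and the 'TERM' token are illustrative assumptions, not from the post. Fixed-width records written with syswrite (each smaller than PIPE_BUF, so writes are atomic) keep reads from interleaving between workers, and a blocking sysread stands in for the sleep-and-retry step:

#!/usr/bin/perl
use strict;
use warnings;

use constant NUM_WORKERS => 10;
use constant REC_LEN     => 16;  # fixed record size, well under PIPE_BUF

pipe(my $reader, my $writer) or die "pipe: $!";

my @pids;
for (1 .. NUM_WORKERS) {
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                         # child: the client routine
        close $writer;                       # workers only read
        while (sysread($reader, my $rec, REC_LEN)) {
            (my $task = $rec) =~ s/\s+\z//;  # strip record padding
            last if $task eq 'TERM';         # special terminate task
            print "worker $$ handling task $task\n";
            sleep 1;                         # stand-in for real work
        }
        exit 0;
    }
    push @pids, $pid;                        # parent: remember the worker
}

close $reader;                               # master only writes
for my $task (1 .. 33, ('TERM') x NUM_WORKERS) {
    syswrite($writer, pack('A' . REC_LEN, $task)) or die "syswrite: $!";
}
close $writer;                               # EOF for any worker still reading

waitpid($_, 0) for @pids;                    # reap all workers
print "all tasks done\n";

No worker ever forks, so the forked code stays sterile; the master is the only process that creates children.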
Yes, I would definitely not want a fork bomb.
The process that runs for each client is actually a 3,000-line script that uses .NET to automate Internet Explorer in "private" mode (so they can all run together without cookie_jar issues), so there's no room for more processes than needed.
As I said previously, I've just been checking the count and sleeping for roughly an hour to let all 10 finish before starting the next group, but I often see only 1 or 2 left running, and realized I could significantly speed things up by ensuring there were ALWAYS 10 running.
I'll incorporate all you guys' advice and post the results to show the solution.
Thank you!!
Oh yeah, I obviously don't want to start my process with system() anymore, because that creates another fork, right?
Could I just use backticks to call the other script and pass the client ID as $ARGV[0]?
Any thoughts?...
~~~~~~~~~ I'm unique, just like everybody else! ~~~~~~~~~
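One sketch of the exec route raised above: since $fork->start has already forked, the child can simply be replaced by the external script, so no extra process is created (both system() and backticks fork again internally). The loop is the one from the earlier post, with the same illustrative script path:

for my $client ( @$clients ){
    $fork->start( $client->{id} ) and next;
    # In the child: replace this process with the external script.
    # exec never returns on success, so $fork->finish is never reached;
    # the parent still reaps this child in wait_all_children.
    exec( 'perl', 'C:/Path/To/Folder/process.pl', $client->{id} )
        or die "exec failed: $!";
}
$fork->wait_all_children;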
Re: Sequential processing with fork.
by Anonymous Monk on Aug 04, 2015 at 21:22 UTC
#!/usr/bin/perl
# http://perlmonks.org/?node_id=1137416
use strict;
use warnings;

$| = 1;

for my $i (1..33)
{
    $i > 10 and warn("waiting...\n"), wait;
    fork or warn("client $i started\n"), sleep(60), die("client $i ended\n");
    sleep 5;
}
1 while wait > 0; # reap the rest
warn "all clients finished\n";
replace the "fork or warn..." with: fork or exec("yourprocess"), die "exec failed $!";
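For instance, the loop above with that substitution applied ("yourprocess" is a placeholder; in the OP's case it would be the perl invocation of process.pl):

for my $i (1..33)
{
    $i > 10 and warn("waiting...\n"), wait;
    # Placeholder command; substitute the real script and arguments.
    fork or exec("perl", "process.pl", $i), die("exec failed: $!\n");
    sleep 5;
}
1 while wait > 0; # reap the rest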
Re: Sequential processing with fork.
by Anonymous Monk on Aug 04, 2015 at 21:43 UTC
By far the easiest thing to do is to put your 33 requests into a shared queue, then spawn however many workers (10) you need. Each worker pops a request off the queue until there are none left; then it exits.
use forks;
use Thread::Queue qw( ); # 3.01+

use constant NUM_WORKERS => 10;

sub work {
    my ($client) = @_;
    print("$client start...\n");
    sleep(3 + int(rand(2)));
    print("$client done.\n");
}

{
    my $q = Thread::Queue->new();

    for (1..NUM_WORKERS) {
        async {
            # dequeue() returns undef once the queue is ended and empty.
            while (my $client = $q->dequeue()) {
                work($client);
            }
        };
    }

    $q->enqueue($_) for 1..33;
    $q->end();

    $_->join() for threads->list();
}