in reply to Iteration through large array using a N number of forks.

Here is an example which may work for you, based on some snippets Abigail posted a while ago. My example:
#!/usr/bin/perl
use warnings;

if ($#ARGV < 0) { @ARGV = qw(a b c d) }

&afork(\@ARGV, 4, \&mysub);
print "Main says: All done now\n";

sub mysub {
    my $x = $_[0];
    mkdir "dir$x" or die "mkdir dir$x: $!";
    chdir "dir$x" or die "chdir dir$x: $!";
    for my $i (1 .. 9) {
        system "touch", "$i-$x";
        # open3(OUTPUT, INPUT, ERRORS, "cd dir$x; make clean; make all");
        # <code to process the output of the make commands and store into log files>
    }
}
##################################################
sub afork (\@$&) {
    my ($data, $max, $code) = @_;
    my $c = 0;
    foreach my $data (@$data) {
        wait unless ++$c <= $max;
        die "Fork failed: $!\n" unless defined(my $pid = fork);
        exit $code->($data) unless $pid;
    }
    1 until -1 == wait;
}
#####################################################
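The commented-out open3 line above only sketches the idea of capturing the make output. Here is a minimal, untested sketch of one way to do that with the core IPC::Open3 module; the run_make name, the sh -c wrapper, and the build.log filename are my own assumptions, not part of the original post:

#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

# Run "make clean; make all" in $dir, appending all output to $dir/build.log.
sub run_make {
    my ($dir) = @_;
    my $err = gensym;    # open3 wants a real glob for stderr, even if unused
    # Merging stderr into stdout in the shell (2>&1) lets one read drain
    # everything without risking a pipe deadlock between the two handles.
    my $pid = open3(my $in, my $out, $err,
                    'sh', '-c', "cd $dir && make clean && make all 2>&1");
    close $in;           # make gets no input from us

    open my $log, '>>', "$dir/build.log" or die "$dir/build.log: $!";
    print {$log} <$out>;
    close $log;
    waitpid $pid, 0;     # reap the child; its exit status lands in $?
    return $? >> 8;
}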
The original old post from Abigail
#!/usr/bin/perl
# by Abigail of perlmonks.org
#
# Sometimes you have a need to fork off several children, but you want to
# limit the maximum number of children that are alive at one time. Here are
# two little subroutines that might help you, mfork and afork. They are very
# similar. They take three arguments, and differ in the first argument. For
# mfork, the first argument is a number, indicating how many children should
# be forked. For afork, the first argument is an array - a child will be
# forked for each array element. The second argument indicates the maximum
# number of children that may be alive at one time. The third argument is a
# code reference; this is the code that will be executed by the child. One
# argument will be given to this code fragment; for mfork it will be an
# increasing number, starting at one. Each next child gets the next number.
# For afork, the array element is passed. Note that this code will assume no
# other children will be spawned, and that $SIG{CHLD} hasn't been set to
# IGNORE.

mfork(10, 10, \&hello);
sub hello { print "hello world\n"; }
print "all done now\n";

###################################################
sub mfork ($$&) {
    my ($count, $max, $code) = @_;
    foreach my $c (1 .. $count) {
        wait unless $c <= $max;
        die "Fork failed: $!\n" unless defined(my $pid = fork);
        exit $code->($c) unless $pid;
    }
    1 until -1 == wait;
}
##################################################
sub afork (\@$&) {
    my ($data, $max, $code) = @_;
    my $c = 0;
    foreach my $data (@$data) {
        wait unless ++$c <= $max;
        die "Fork failed: $!\n" unless defined(my $pid = fork);
        exit $code->($data) unless $pid;
    }
    1 until -1 == wait;
}
#####################################################

I'm not really a human, but I play one on earth. flash japh

Re^2: Iteration through large array using a N number of forks.
by Spesh00 (Initiate) on Feb 22, 2005 at 21:47 UTC
    Wow. Excellent, thanks everyone! Extremely appreciated. As a bonus question: would it be all that difficult to simply have those threads sleep instead of dying outright, and then fire back up to reiterate through the file? I'm just pondering the overhead of having the overall script continually fire up every x minutes and spawn 30 kids, versus just having it sleep with its 30 kids and then reiterate.
      Would it be all that difficult to also simply have those threads sleep instead of dying outright, and then fire back up to reiterate through the file?

      If you use threads, then that is no problem whatsoever :)
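      A minimal sketch of that idea, assuming the core threads and Thread::Queue modules; the urls.txt filename, the 30-worker count, and the 300-second pause are placeholders for illustration, not values from the thread:

      #!/usr/bin/perl
      use strict;
      use warnings;
      use threads;
      use Thread::Queue;

      my $q = Thread::Queue->new;

      # Workers block in dequeue() - effectively sleeping - until work
      # arrives, instead of exiting after each pass.
      my @workers = map {
          threads->create(sub {
              while (defined(my $line = $q->dequeue)) {
                  print "thread ", threads->tid, " handling $line\n";
              }
          });
      } 1 .. 30;

      # The main loop re-reads the file and refills the queue every x minutes.
      while (1) {
          open my $fh, '<', 'urls.txt' or die "urls.txt: $!";
          chomp(my @lines = <$fh>);
          $q->enqueue(@lines);
          sleep 300;
      }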


      Examine what is said, not who speaks.
      Silence betokens consent.
      Love the truth but pardon error.
        Reading through some other threads, it seems that LWP::Parallel might do an equally effective job. What would be the smallest chunk of code to simply request an array of URLs (with a timer) to measure their response time with LWP::Parallel? The examples I've found are rather thick for what I'm trying to accomplish.
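        A minimal sketch along those lines, based on the LWP::Parallel::UserAgent documentation rather than tested code; the URL list and 10-second timeout are placeholders, and this times the whole batch (per-URL timings would need the module's callback interface):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use LWP::Parallel::UserAgent;
        use HTTP::Request;
        use Time::HiRes qw(time);

        my @urls = ('http://www.example.com/', 'http://www.perlmonks.org/');

        my $pua = LWP::Parallel::UserAgent->new;
        $pua->timeout(10);    # per-connection timeout, in seconds

        foreach my $url (@urls) {
            # register() returns a response object only if registration failed
            if (my $res = $pua->register(HTTP::Request->new(GET => $url))) {
                warn "could not register $url\n";
            }
        }

        my $start   = time;
        my $entries = $pua->wait;    # block until every request has finished
        printf "batch took %.3f seconds\n", time - $start;

        foreach (keys %$entries) {
            my $res = $entries->{$_}->response;
            print $res->request->url, " => ", $res->code, " ", $res->message, "\n";
        }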