As an aside -- why do you think processing the log files in parallel will increase throughput? If the log parsing is I/O bound, you'll only generate more I/O contention and possibly even increase overall execution time. If the process is CPU bound, you will need to run this on a multiprocessor machine (of whatever flavour is available) to benefit. If it's a single-processor machine, you'll only benefit if the parsing is just the right mix of I/O and CPU bound work, and then only if you keep the number of concurrent processes quite low.
So, in short, if you're not running on a multiprocessor machine, make sure that this is not a case of premature optimisation.
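Before forking anything, it may be worth timing the parse of a single log file to see which case you're in. A minimal sketch, assuming a hypothetical parse_one_log() standing in for your real parsing code: a CPU/wall ratio near 1 suggests CPU bound work (extra processes only help on a multiprocessor box), while a much lower ratio means the parse is mostly waiting on the disk.

use strict;
use warnings;
use Time::HiRes qw(time);

my $wall_start = time;
my @cpu_start  = (times)[0, 1];          # user and system CPU used so far

parse_one_log($ARGV[0]);                 # hypothetical: your parsing routine

my $wall    = time - $wall_start;
my @cpu_end = (times)[0, 1];
my $cpu     = ($cpu_end[0] - $cpu_start[0]) + ($cpu_end[1] - $cpu_start[1]);

printf "wall: %.2fs  cpu: %.2fs  ratio: %.2f\n",
    $wall, $cpu, $wall > 0 ? $cpu / $wall : 0;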
use Proc::Queue size => 10, qw(run_back all_exit_ok);  # run at most 10 children at once

while (...) {
    my @logfn = get_log_names();
    my @pids  = map {
        run_back { process_log $_ }    # fork a child to handle each log file
    } @logfn;
    all_exit_ok(@pids)
        or print STDERR "some child failed\n";
}
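If Proc::Queue isn't to hand, a roughly equivalent sketch with Parallel::ForkManager (get_log_names and process_log are your own routines, as above) caps the number of simultaneous children the same way:

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(10);   # at most 10 children at once
for my $file (get_log_names()) {
    my $pid = $pm->start and next;         # parent: move on to the next file
    process_log($file);                    # child: do the work
    $pm->finish;                           # child exits here
}
$pm->wait_all_children;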
use POSIX ":sys_wait_h";    # for WNOHANG

my @files = qw(file1 file2 file3); # your log-files
my %pids;
for my $file (@files) {
    next unless -e $file;
    my $pid = fork();
    unless ( defined $pid ) {    # fork returns undef on failure
        warn($!);
        last;
    }
    if ($pid) {
        $pids{$pid} = 1;         # parent: remember the child
    }
    else {
        # child: do what you want with $file
        exit(0);
    }
}
while ( keys %pids ) {
    my $pid = waitpid( -1, WNOHANG );
    die "$!" if $pid == -1;
    delete $pids{$pid} if $pid > 0;   # 0 means "no child has exited yet"
}
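Note that the reaping loop above busy-waits while children are still running (waitpid with WNOHANG returns 0 immediately). A minimal blocking variant, using the same %pids hash, sleeps in the kernel instead:

while ( keys %pids ) {
    my $pid = waitpid( -1, 0 );   # block until some child exits
    last if $pid == -1;           # no children left at all
    delete $pids{$pid};
}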
for my $file ( <*.log> ) { # for example...
    defined( my $pid = fork ) or die "Unable to fork: $!";
    unless ( $pid ) {
        # in child
        # process $file
        exit;
    }
}
1 while wait > 0; # avoid zombies
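Tying this back to the first reply's point about contention: a hedged sketch of the same loop with a cap on the number of simultaneous children ($MAX is an arbitrary figure to tune for your machine, not anything from the original post):

my $MAX     = 4;    # tune for your machine
my $running = 0;

for my $file ( <*.log> ) {
    if ( $running >= $MAX ) {
        wait;                    # block until one child finishes
        $running--;
    }
    defined( my $pid = fork ) or die "Unable to fork: $!";
    unless ( $pid ) {
        # in child: process $file
        exit;
    }
    $running++;
}
1 while wait > 0;                # reap the remaining children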
The statement
1 while wait > 0; # avoid zombies
won't work on Windows systems. The Windows fork emulation uses the negative of the thread ID as the pid, so wait returns negative numbers for all children.
It seems the most cross-platform wait statement is:
1 while wait != -1; # avoid zombies
since the main thread has ID 1, the child pseudo-processes will never have that ID, and -1 is what wait natively returns when there are no further child processes.
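For example, a reap loop written against that convention (a minimal sketch; the warn is only there to show what wait hands back, including the negative pseudo-PIDs on Windows):

while ( ( my $pid = wait ) != -1 ) {
    warn "reaped child $pid, exit status ", $? >> 8, "\n";
}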
- j