Do you mean you would really like the progress of all current jobs to be displayed at the same time? Then maybe you want a loop that reports current status on all entries in the ".dractrack" file, whereas your "CheckStat()" function watches a single job until it finishes before checking the next one.
Consider the following alternative -- each time you run this, it will produce one line of status output for each job in your .dractrack file (at least, I hope so; I haven't tested it :P). Now, just decide how often you want an update, e.g. once every 5 seconds, and run it that often.
(If you have a lot of jobs, the first ones in the list will scroll by unless you pipe the output to "less" or something equivalent. If the jobs stick around a good while, you could format the reports as html, redirect to a file, and (re)load it in your browser whenever you feel like it.)
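If you don't feel like re-running it by hand, a trivial wrapper can handle the polling and the redirect for you. This is just a sketch (untested, like the rest); the script name "jobstat.pl" and the output file "status.txt" are placeholders for whatever you actually use:

#!/usr/bin/perl
use strict;

# re-run the report every 5 seconds, keeping the latest snapshot in a
# file you can page through with "less" or point a browser at (if you
# decide to format the report as html)
while (1) {
    system( "perl jobstat.pl > status.txt" ) == 0
        or warn "jobstat.pl exited with status $?\n";
    sleep 5;
}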
Given that the jobs you are monitoring appear to be very well organized, there is probably something in ".dractrack" that will always uniquely identify each job in the list. On that assumption, I'm using a hash (assuming "rundir" works as a key field) to hold the presumably static information from the "jxrun.stg" file for each job. That way, if I decide later on to put the "main" part inside a while loop with a sleep call at each iteration, the static data only has to be read once per job.
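In other words, %jobs never holds more than two things per run directory (the names below are the ones the code uses):

# what %jobs holds, keyed on the rundir field of each .dractrack line:
#   $jobs{$rundir}{term}  -- the final stage number, read once from jxrun.stg
#   $jobs{$rundir}{done}  -- set once the job reaches that stage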
use strict;
my @list;
my %jobs;

# start of main:
open( DRAC, ".dractrack" ) or die "Can't open .dractrack\n";
@list = <DRAC>;
close DRAC;

my $ndone = 0;
for ( @list ) {
    chomp;
    my ($type,$primary,$rundir,$logfile,$host) = split;
    if ( not exists( $jobs{$rundir} )) {
        # read the final stage number once per rundir
        open( STAGE, "$rundir/jxrun.stg" );
        my @stage = <STAGE>;
        close STAGE;
        $jobs{$rundir}{term} = (split( /\b/, pop @stage ))[8];
    }
    my $atstage = CheckStage( $rundir, $logfile );
    if ( $atstage == $jobs{$rundir}{term} ) {
        print "$type job $primary on $host -- DONE\n";
        $jobs{$rundir}{done}++;
        $ndone++;
    }
    else {
        printf( "Cell: %s Verifying: %s at stage %3d of %3d -- %2.2f%%\n",
                $primary, $type, $atstage, $jobs{$rundir}{term},
                100 * $atstage / $jobs{$rundir}{term} );
    }
}

# write a "current" version of ".dractrack", if necessary.
# WATCH OUT! You REALLY need a semaphore or some other file
# locking mechanism here (check Sean Burke's article about
# semaphores in the most recent Perl Journal:
# http://www.sysadminmag.com/tpj/)
if ( $ndone ) {
    open( DRAC, ">.dracnew" ) or die "Can't rewrite .dractrack\n";
    for ( @list ) {
        my $dir = (split)[2];
        print DRAC "$_\n" unless $jobs{$dir}{done};
    }
    close DRAC and rename ".dracnew", ".dractrack";
}
# end of main

sub CheckStage {
    my ($path, $log) = @_;
    open( LOG, "$path/$log" );
    my @lines = <LOG>;
    close LOG;
    # localize $_ so we don't clobber the caller's loop variable,
    # which is aliased to the current element of @list
    local $_;
    $_ = pop @lines until /AT STAGE:/;
    (split)[3];
}
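About that locking warning: flock() is probably the simplest thing that would do the job here. A minimal sketch (untested; the ".dractrack.lock" file name is my own invention, and whatever writes .dractrack in the first place would need to take the same lock):

use Fcntl qw(:flock);

# take an exclusive lock before touching .dractrack; anything else
# that rewrites the file should grab the same lock first
open( LOCK, ">.dractrack.lock" ) or die "Can't open lock file\n";
flock( LOCK, LOCK_EX )           or die "Can't get exclusive lock\n";

# ... rewrite .dracnew and rename it over .dractrack here ...

close LOCK;    # closing the handle releases the lock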