in reply to tail -f multiple nfs files

I am not sure the for loop you have is doing exactly what it should, though I can't say for certain since I can't run it to duplicate what you are doing. I did look on the PerlMonks site and found this. Have you tried it with that module?
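For what it's worth, File::Tail from CPAN is the usual suggestion for this kind of job (whether that is the module I found is an assumption on my part); its documented select() interface can watch several files at once:

    use strict;
    use warnings;
    use File::Tail;

    # One File::Tail object per log file named on the command line.
    my @files = map { File::Tail->new(name => $_, maxinterval => 5) } @ARGV;

    while (1) {
        # File::Tail::select() behaves like select(), but on tail objects;
        # @pending holds the tails that have new lines ready to read.
        my ($nfound, $timeleft, @pending) =
            File::Tail::select(undef, undef, undef, 60, @files);
        foreach my $tail (@pending) {
            print $tail->{input}, ": ", $tail->read;
        }
    }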
I will be honest and say that I am slightly confused by what looks like the opening of a new file in the middle of processing, and that may affect the loop as well.
    if (-e $NEXT_FILE){
        sleep 3;    #SLEEP TO ALLOW OS TO SYNC UP
        close(ACCT);
        $DETAIL_FILE = $NEXT_FILE;
        open (ACCT, "$DETAIL_FILE") || die "couldn't open file: $!\n";
        $NEXT_FILE = get_next_file_name($DETAIL_FILE);
        next;
        #NEXT FILE DOESN'T EXIST, SO SLEEP AND TRY AGAIN
    }
That is the block that worries/confuses me. I notice that on the first file, which you open prior to the for loop, you remove the first line in case it is invalid, but I don't see any code verifying the other lines coming into the script. There is no test for the correct number of array elements. Could there be bad lines in the data feed that aren't accounted for?
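A cheap guard would be to check the field count of every record before using it. A minimal sketch, assuming a pipe-delimited feed with ten fields per record (both the delimiter and the count are assumptions, since the record layout isn't shown):

    my $EXPECTED_FIELDS = 10;    # assumption: adjust to the real record layout

    while (my $line = <ACCT>) {
        chomp $line;
        my @fields = split /\|/, $line;    # assumption: pipe-delimited feed
        if (@fields != $EXPECTED_FIELDS) {
            warn "skipping malformed record: $line\n";
            next;
        }
        # ... process @fields as before ...
    }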

Re: Re: tail -f multiple nfs files
by Anonymous Monk on Feb 02, 2002 at 01:43 UTC
    Thanks for the response. I know this code works, and it works well when running only one instance (one file). However, when I run two instances against two NFS-mounted servers, I get the record loss. The block you pointed out is a daily-rollover feature: if it detects an EOF while tailing, it checks whether a file for the next day exists and, if so, starts tailing that. This way the script follows logs that rotate daily.
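    For anyone reading along, the loop being described is roughly this (a sketch only, not the exact code; process_record() stands in for the real per-record handling, while $DETAIL_FILE, $NEXT_FILE, ACCT, and get_next_file_name() are from the block quoted above):

        while (1) {
            while (my $line = <ACCT>) {
                process_record($line);    # stand-in for the real handling
            }
            # EOF reached: has the next day's file appeared yet?
            if (-e $NEXT_FILE) {
                sleep 3;                  # let the OS/NFS client settle
                close(ACCT);
                $DETAIL_FILE = $NEXT_FILE;
                open(ACCT, $DETAIL_FILE) or die "couldn't open file: $!\n";
                $NEXT_FILE = get_next_file_name($DETAIL_FILE);
            }
            else {
                sleep 3;                  # still today's file: wait for more data
                seek(ACCT, 0, 1);         # no-op seek clears the EOF flag
            }
        }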
      Move your for loop into a subroutine. Test for a file change and, if the condition is true, call the sub again from inside.
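      A minimal sketch of that restructuring (tail_file() is a hypothetical name, process_record() is a stand-in for the real per-record code, and get_next_file_name() is the helper from the original script):

          sub process_record { print $_[0] }    # stand-in for the real handling

          sub tail_file {
              my ($name) = @_;
              open(my $fh, '<', $name) or die "couldn't open $name: $!\n";
              while (1) {
                  while (my $line = <$fh>) {
                      process_record($line);
                  }
                  my $next = get_next_file_name($name);    # poster's helper, assumed available
                  if (-e $next) {                          # file change detected
                      sleep 3;                             # let the OS sync up
                      close($fh);
                      return tail_file($next);             # call the sub again from inside
                  }
                  sleep 3;                                 # no new file yet: wait for more data
                  seek($fh, 0, 1);                         # no-op seek clears EOF on the handle
              }
          }

          tail_file($DETAIL_FILE);

      One caveat: each rollover adds a call frame, so over a very long run it may be cleaner to have the sub return the new file name and re-enter it from a small driving loop instead of recursing.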