http://qs1969.pair.com?node_id=204985


in reply to Picking up where you left off..

Second idea was to use some kind of pipe arrangement, where access.log is a special file with a script connected to the other end.

This is a named pipe (FIFO), and might work. It's just a special file that one or more processes can write to, and another process can read from. If you are on a Unix system, check out the manpage for mkfifo for more info, or ask Google.
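
For what it's worth, a bare-bones FIFO reader in Perl might look something like this (untested sketch; the FIFO path is made up, and squid would have to be configured to write its access log to that path). The main catch is that a reader has to be connected whenever squid writes, or squid will block on the pipe:

    use strict;
    use POSIX qw(mkfifo);

    my $fifo = '/var/log/squid/access.fifo';    # hypothetical path

    # create the named pipe once, if it isn't already there
    unless ( -p $fifo ) {
        mkfifo( $fifo, 0600 ) or die "mkfifo $fifo: $!";
    }

    # this open blocks until a writer (squid) opens the other end
    open( FIFO, '<', $fifo ) or die "open $fifo: $!";
    while ( my $line = <FIFO> ) {
        print "got: $line";    # process each access.log line as it arrives
    }
    close FIFO;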

I would also check out the File::Tail module. Its documentation does a better job of explaining how it works than I can.
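
Roughly, the File::Tail synopsis boils down to something like this (adapted loosely from its docs, untested here):

    use strict;
    use File::Tail;

    my $tail = File::Tail->new(
        name        => '/var/log/squid/access.log',    # hypothetical path
        maxinterval => 60,    # poll at most once a minute
        adjustafter => 7,
    );

    # read() blocks until a new line shows up, so this is really a
    # daemon-style loop rather than a run-and-exit script
    while ( defined( my $line = $tail->read ) ) {
        print $line;
    }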

--
IndyZ

Replies are listed 'Best First'.
Re: Re: Picking up where you left off..
by submersible_toaster (Chaplain) on Oct 14, 2002 at 07:16 UTC
    :( Sadly I don't think I can wrangle FIFOs to my needs. I read the File::Tail docs (why does it need Time::HiRes?), which would be cool for a daemon-type execution, but I was hoping to only run this script at intervals.

    Closer inspection of Perl in a Nutshell led me to this...

    use strict;

    open( COUNT, '<counter' );
    my $whence = scalar <COUNT>;
    chomp $whence;
    close COUNT;

    open( FH, '<logfile' );
    # scram to position we wrote last to 'counter'
    seek FH, $whence, 0;

    my $line = scalar <FH>;
    print $line;
    my $count = tell FH;

    open( COUNT, '>counter' );
    print COUNT $count;
    close COUNT;
    Which exhibits the behaviour I believe is needed: in this test case, the script prints the next single line from 'logfile' and exits, saving its position in 'logfile' to 'counter'. OK, so I had to seed counter with '0' first!!
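
    The seeding could probably be avoided by defaulting the offset to 0 when 'counter' doesn't exist yet; something like this (untested) in place of the first open:

    my $whence = 0;
    if ( open( COUNT, '<counter' ) ) {
        $whence = <COUNT>;
        chomp $whence;
        close COUNT;
    }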

    This is destined to run on rh7.3 with perl 5.6.1; what scares me most is how squid will behave with another process reading from the logfile it's writing to.

      Why are you so dubious about opening the squid log for read access while squid writes to it? People do this sort of thing all the time. For example, many people sit with an xterm open doing nothing but tail -f logfile. Just imagine if it were harmful: "Just a sec, I will check the log file for diagnostic messages. Oh oops! I have to rotate the log files/stop then restart the daemon!". Yuk.

      If you do start scanning the current log at intervals, starting from the file position you got up to in the previous scan, you will need to take into account the default cron jobs that rotate the logs daily (I think). Have a look at the cron jobs on the machine and (at least on some Linux machines) /etc/logrotate.conf.
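
      One rough way to cope with that (untested, reusing the $whence offset and FH filehandle from the snippet above): if the saved offset is larger than the file now is, assume the log was rotated and start again from the top.

          my $size = -s 'logfile';
          if ( $whence > $size ) {
              # saved offset is past EOF, so the log must have been
              # rotated or truncated since the last run
              $whence = 0;
          }
          seek FH, $whence, 0;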

      --blm--