Something like this might work:

    opendir(DIR, $archivedir) || die "Could not open $archivedir: $!\n";
    foreach (readdir(DIR)) {    # put all the non-dir files into the array
        push(@files, "$archivedir/$_") if -f "$archivedir/$_";
    }
    closedir(DIR);
    @files = sort { (-M $a) <=> (-M $b) } @files;    # newest first: -M is age in days
Note that this is not the most efficient possible solution. If you have a lot of files in that directory, the sort routine will spend a lot of time re-checking each file's modification time, once per comparison, while it sorts.
If you have just a few files there, it doesn't matter. If you have hundreds, you might want to apply the Schwartzian transform to that sort.
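For what it's worth, a Schwartzian transform version of that sort might look like the sketch below: each file's age is computed once by the inner map, cached in an anonymous array, sorted on, and then discarded by the outer map.

    # Sort newest first, statting each file only once instead of once per comparison.
    @files = map  { $_->[0] }
             sort { $a->[1] <=> $b->[1] }
             map  { [ $_, -M $_ ] } @files;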
I wouldn't copy files around... that would take up far too much space and time if the log files are big, and log files usually are.
A second problem is file caching: the webserver might refuse to serve up the new file, even after it has been updated, if it delivered the "same" document a little earlier. A server like Apache behaves as if it keeps an in-memory file cache; at least, I think it must.
A simple solution that works well for the latter problem is to serve the logs dynamically through a script, even if all it does is read the contents of the file and pass them along to the browser.
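A minimal sketch of such a pass-through script, assuming a plain CGI setup; the log path here is only a placeholder, not something from the thread:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $logfile = '/var/log/apache/access.log';   # hypothetical path

    print "Content-type: text/plain\n\n";
    open(my $fh, '<', $logfile) or die "Cannot open $logfile: $!\n";
    print while <$fh>;    # pass the file through unchanged
    close($fh);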
So... if we're already using a script to show the logs, why not filter them while displaying them? I'm thinking of timestamping the actions and simply skipping any entries in the log that are too recent. For example, Apache's server logs include a timestamp on every line.
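As a rough sketch of that idea, assuming Apache's common log format and an arbitrary one-hour cut-off (both are assumptions on my part), one could skip any line whose timestamp is newer than the threshold:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::Local;

    my $cutoff = time() - 3600;    # hypothetical: hide the most recent hour
    my %month  = ( Jan=>0, Feb=>1, Mar=>2, Apr=>3, May=>4,  Jun=>5,
                   Jul=>6, Aug=>7, Sep=>8, Oct=>9, Nov=>10, Dec=>11 );

    while (my $line = <>) {
        # common log format timestamp looks like [10/Oct/2000:13:55:36 -0700]
        if ($line =~ m{\[(\d+)/(\w+)/(\d+):(\d+):(\d+):(\d+) ([+-])(\d\d)(\d\d)\]}) {
            my ($mday, $mon, $year, $h, $m, $s, $sign, $oh, $om)
                = ($1, $2, $3, $4, $5, $6, $7, $8, $9);
            my $offset = ($sign eq '-' ? -1 : 1) * ($oh * 3600 + $om * 60);
            my $when   = timegm($s, $m, $h, $mday, $month{$mon}, $year) - $offset;
            next if $when > $cutoff;    # too recent, skip this entry
        }
        print $line;
    }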
Under the assumption that the log only gets appended to, as is commonly the case, and there is no timestamp in the data, you can save the current time and the log file's length to a separate index file whenever an event occurs. That way you can figure out where to stop reading the log, based on what tell tells you: anything beyond that offset is too recent.
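A rough sketch of that scheme; the file names and the one-hour delay are hypothetical. The recording side appends the current time and the log's size to an index file, and the reading side stops at the newest offset that is old enough:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical names; adjust to taste.
    my $logfile   = '/var/log/game/actions.log';
    my $indexfile = '/var/log/game/actions.idx';
    my $delay     = 3600;    # hide the last hour

    # Recorder: call this right after an event is appended to the log.
    # It notes the time and the log's current length.
    sub record_offset {
        open(my $idx, '>>', $indexfile) or die "Cannot append $indexfile: $!\n";
        print $idx time(), " ", (-s $logfile || 0), "\n";
        close($idx);
    }

    # Reader: find the largest offset whose timestamp is old enough,
    # then serve the log only up to that point.
    sub show_old_entries {
        my $stop = 0;
        if (open(my $idx, '<', $indexfile)) {
            while (<$idx>) {
                my ($when, $offset) = split;
                $stop = $offset if $when <= time() - $delay;
            }
            close($idx);
        }
        open(my $log, '<', $logfile) or die "Cannot open $logfile: $!\n";
        while (<$log>) {
            last if tell($log) > $stop;    # anything beyond this is too recent
            print;
        }
        close($log);
    }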