kiat has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks,

This is related to my node at Trap errors in entire module.... I have the following code at the beginning of my main script (right after the shebang line) to trap compile-time errors in all the modules. During development, the errors are directed to the browser for debugging. In production, the errors are written to a trap-error file and a customised error message is displayed in the browser to "enlighten" the user.

    #!C:/perl/bin/perl.exe
    # scriptname: index.pl

    # Note: a "use" inside if() is still executed at compile time for
    # both branches, so the module is loaded once up front and the
    # import list is chosen when the BEGIN block runs instead.
    use CGI::Carp ();

    BEGIN {
        my $development = 0;
        if ($development) {
            CGI::Carp->import(qw(fatalsToBrowser));
        }
        else {
            CGI::Carp->import(qw(fatalsToBrowser set_message carpout));
            open(LOG, ">>datadir/trap_error")
                or die("Unable to open mycgi_log: $!\n");
            carpout(\*LOG);
            sub handle_errors {
                print "<h1>Script Error</h1>";
                print qq~<p>An error occurred while processing your request. This error has been noted and appropriate action will be taken to rectify it.</p>~;
            }
            set_message(\&handle_errors);
        }
    }

    use Mymodule1;
    use Mymodule2;

    my $query = get_param('node') || 'start';
    # Rest of code in the main script...
My questions are:

1) Is it necessary to write the errors to a file? Should I just examine the error.log periodically?

2) If the trap error file is necessary, how do I ensure that it will be kept to a certain size so that there's no risk of it becoming bloated?

Replies are listed 'Best First'.
Re: Trap errors at BEGIN
by matija (Priest) on Mar 29, 2004 at 05:46 UTC
    1. I prefer to write the errors to error.log. It avoids making changes while moving the program from testing to production (always a good thing).

      It also puts other relevant information in close proximity to the messages from my program: the other day I was investigating a case where one operation was performed, but the next operation was not. The two subroutine calls were one after another, with no logic statements in between. I'd still be banging my head against a wall if I hadn't looked at error_log and seen that Apache was shut down at that point...

    2. There are a whole bunch of programs dedicated to keeping log files manageable. The first that comes to mind is logrotate. It will rotate your logs on a periodic basis and keep a specified number of old logs zipped and stored. There are others, including some that decide when to rotate based on file size.
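      As a concrete illustration, a logrotate rule for such a trap file might look like the sketch below; the path and the limits are invented here, not taken from the original post:

```
# Hypothetical logrotate rule for the CGI trap-error file.
/var/www/datadir/trap_error {
    # rotate once the file grows past 100 kB
    size 100k
    # keep four old copies, gzipped
    rotate 4
    compress
    # don't complain if the file is missing, skip empty files
    missingok
    notifempty
}
```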

      One piece of advice: it is a good idea to run a script over such a log that filters out the expected, normal messages and mails everything else to whoever is in charge of maintaining the script. After the first week or two, this mail should usually be empty. If it isn't, you're obviously ignoring problems in your script, and that is always a sign of trouble.
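      A minimal sketch of such a filter, assuming the "expected" messages can be described by a list of regexes; the patterns and the sample log lines are invented for illustration (in a real setup you would read the log file and pipe the leftovers to mail):

```perl
use strict;
use warnings;

# Patterns for messages we consider normal and safe to ignore.
my @expected = (
    qr/^\[info\]/,
    qr/^\[notice\]/,
);

# Stand-in for lines read from the log file.
my @log_lines = (
    '[info] routine startup message',
    'Out of memory during request handling',
    '[notice] caught SIGTERM, shutting down',
);

# Keep only the lines that match none of the expected patterns.
my @unexpected = grep {
    my $line = $_;
    !grep { $line =~ $_ } @expected;
} @log_lines;

# In production these would be mailed to the maintainer;
# here we just print them.
print "$_\n" for @unexpected;
```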