starbuck has asked for the wisdom of the Perl Monks concerning the following question:

Hi,

I have a script that loops around several hundred times, each time gathering a dataset to measure performance over time.

If this process gets a hard kill (ctrl-c, a server reboot, etc.), is there a graceful way to "dump the results so far" to a file?

Currently I open the file for append, print, then close it, on every single iteration. Is there a neater way to do this?

Thanks.


Re: How do I dump data to file on hard exit?
by Anonymous Monk on Mar 31, 2006 at 13:52 UTC
    Hi,

    You can set up a handler for SIGINT:

    $SIG{INT} = \&ctrlc_exit;

    sub ctrlc_exit {
        print LOGFILE "exiting by control c\n";
        die "program exiting elegantly";
        close LOGFILE;
    }


    hope this helps
    Displeaser
      DOH!!!

      Closed the file after dying, ah well, you know what I mean.
      BTW, this will only catch ctrl-C; it won't handle a server reboot.

      Displeaser
        The process might get a TERM signal on a reboot, so set up a handler for that as well.

        $SIG{TERM} = ...
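
        For example (a minimal sketch, reusing the ctrlc_exit handler from the reply above):

        $SIG{INT}  = \&ctrlc_exit;   # ctrl-C at the terminal
        $SIG{TERM} = \&ctrlc_exit;   # default signal sent by kill and by shutdown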

        Cheers,

        JohnGG

        BTW, this will only catch ctrl-C; it won't handle a server reboot.

        Most (all?) Unix OSs will send a TERM signal to every process running when shutting down, and a KILL one if they do not terminate after a while.

      Please set
      $SIG{INT} = 'IGNORE';
      as the first thing in your handler, or else the handler will recurse if someone presses ctrl-C again while it is running.

      And, like you remarked yourself, you still have to actually exit or die at the end of the sub, or the program will just resume where it left off afterwards.
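
      Putting both points together, a minimal sketch of such a handler (graceful_exit is a hypothetical name; LOGFILE stands in for whatever results file the script writes):

      sub graceful_exit {
          $SIG{INT} = 'IGNORE';     # first thing: block re-entry while we clean up
          print LOGFILE "caught signal, dumping results so far\n";
          close LOGFILE;
          exit 1;                   # actually leave; don't resume the loop
      }
      $SIG{INT} = \&graceful_exit;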

        Thanks, I used the signal handlers. Also, autoflush removed the need to open and close the file on each iteration.
Re: How do I dump data to file on hard exit?
by larryl (Monk) on Mar 31, 2006 at 22:29 UTC

    Just wondering: Why do you currently open/print/close the file for each iteration, instead of leaving it open?

    You could put the close in an END block, and then use sigtrap to make sure your END block gets run:

    use sigtrap qw(die normal-signals error-signals);
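
    A minimal sketch of how the pieces fit together (the filename is just for illustration):

    use sigtrap qw(die normal-signals error-signals);

    open my $log, '>>', 'results.log' or die "open failed: $!";

    END {
        # Runs on normal exit and, because sigtrap turns the trapped
        # signals into die, on INT/TERM etc. as well.
        close $log if $log;
    }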

    Larry

Re: How do I dump data to file on hard exit?
by nimdokk (Vicar) on Mar 31, 2006 at 17:38 UTC
    You might want to look at capturing other signals as well. DIE comes to mind at the moment, but you can dig up a list of different signals from the Unix man pages (I think kill might list them out, but I can't recall at the moment). I haven't played around with capturing signals apart from DIE, so your mileage may vary. Some signals you might not be able to capture at all; for example, I doubt you'd be able to write to a file if someone has yanked the power cord from the server.

    Update: Additional info on signals in my scratchpad.
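
    For what it's worth, perl can list the signal names it knows about via the standard Config module (a quick sketch):

    use Config;

    # $Config{sig_name} is a space-separated list, e.g. "ZERO HUP INT QUIT ..."
    print "$_\n" for split ' ', $Config{sig_name};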

Re: How do I dump data to file on hard exit?
by sgifford (Prior) on Apr 01, 2006 at 04:39 UTC
    You don't have to close the filehandle to commit its output to disk. First, make sure you're flushing perl's buffer to the OS on every write, either by setting $| or with IO::Handle::flush. Then, to ask the OS to actually write the data to disk, use IO::Handle::sync. This may slow you down significantly, simply because disks are slow compared to memory, but the hit is probably less than a close/open.
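
    A minimal sketch of that approach (gather_dataset() is a hypothetical stand-in for the measurement code):

    use IO::Handle;

    open my $fh, '>>', 'results.log' or die "open failed: $!";
    $fh->autoflush(1);          # flush perl's buffer to the OS on every print

    for my $i (1 .. 500) {
        print $fh gather_dataset($i), "\n";
        $fh->sync;              # ask the OS to commit its buffers to disk (fsync)
    }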

    It's up to you to find the right tradeoff between the speed of your code and how much data is lost in the event of an unclean shutdown.

    One other thought: you can install all the signal handlers you want, but until there's a $SIG{TRIPPED_OVER_POWER_CORD}, that won't be foolproof.