in reply to Safe to run a constant loop?

What I tend to do when faced with a problem like the one you describe (sub-minute intervals, eternal lifespan) is to combine what you did with cron. To avoid the nasty effects of eventual memory leaks, and to flush things out, I code an absolute maximum number of iterations, to the tune of:
    my $tries = 0;
    while (++$tries < $any_number_you_wish) {
        # your code here
    }
    exit;
and then have cron run this daemon periodically. Also, be sure to take a look at Proc::Daemon for detaching from your controlling tty, which is always a nice thing for a daemon to do. You might also want to look at Proc::Watchdog for maintaining a flag that tells whether your daemon is running or not.
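To make that concrete, here is a minimal sketch of the shape I mean, using Proc::Daemon's classic Init interface; the iteration cap, the sleep interval, and the do_work() routine are placeholders you would replace with your own:

    use strict;
    use warnings;
    use Proc::Daemon;

    sub do_work {
        # placeholder for your actual task
    }

    Proc::Daemon::Init;              # detach from the controlling tty

    my $max_iterations = 1_000;      # absolute cap; tune to your leak tolerance
    my $tries          = 0;
    while (++$tries < $max_iterations) {
        do_work();
        sleep 10;                    # whatever sub-minute interval you need
    }
    exit;                            # cron will start a fresh copy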

With a bit of code, you can have your daemon keep a file containing its PID up to date while it is running. At the beginning of its execution, it can look for this file and attempt a dummy kill (signal 0, which checks whether a process exists without actually signaling it) to see if another instance is already running, dying in that case. Otherwise, processing begins for the given number of iterations.
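A sketch of that startup check, assuming a hypothetical PID file location of /var/run/mydaemon.pid:

    use strict;
    use warnings;

    my $pidfile = '/var/run/mydaemon.pid';   # hypothetical location

    if (open my $in, '<', $pidfile) {
        chomp(my $old_pid = <$in> // '');
        close $in;
        # signal 0 probes for the process without disturbing it
        die "already running as pid $old_pid\n"
            if $old_pid =~ /^\d+$/ and kill 0, $old_pid;
    }

    # no live instance found, so record our own PID
    open my $out, '>', $pidfile or die "cannot write $pidfile: $!\n";
    print {$out} "$$\n";
    close $out;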

cron could then run your daemon periodically (say, every minute, which would mean an average outage of 30 seconds whenever the daemon dies) to ensure it stays up no matter what happens. If you go this way, I would defer the detach until basic sanity checks (required files and directories exist, etc.) have been performed, so that cron can also serve as a reporting mechanism: anything the daemon prints before detaching gets mailed to you by cron.
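The crontab entry for that would look something like the line below (the path is hypothetical); combined with the PID-file check above, at most one copy ever runs, and a sanity check that dies before the detach produces a message cron mails back to you:

    * * * * * /usr/local/bin/mydaemon.pl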