in reply to file open problem with scheduled jobs
I suspect the problem is that you are specifying a relative path, i.e. file.log, and expecting that after the detach you will still be in the same directory. You are not. Like all good daemons, Schedule::Cron does a chdir '/' (so that parts of the filesystem can be unmounted):
    } else {
        # Child:
        # Try to detach from terminal:
        chdir '/';    # <---
        open STDIN, '/dev/null'   or die "Can't read /dev/null: $!";
        open STDOUT, '>/dev/null' or die "Can't write to /dev/null: $!";
As a result, if you are in /some/dir and write to file.log without detaching, you are writing to /some/dir/file.log, whereas after detaching you would be writing to /file.log, i.e. in the root directory, where user nobody won't have permission. I further suspect that you are not checking the return value of your open, because with STDERR still open you should be seeing the "permission denied" message:
    open F, ">$fully_qualified_path" or die "Can't write $fully_qualified_path $!\n";
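A minimal sketch of the fix: resolve the log path to an absolute one *before* the scheduler forks and chdir's away. (The file name file.log and the use of File::Spec here are illustrative, not from the original code.)

    use strict;
    use warnings;
    use File::Spec;

    # Resolve relative to the current directory NOW, before the
    # daemon does chdir '/'. After this, the path no longer depends
    # on the process's working directory.
    my $log = File::Spec->rel2abs('file.log');   # e.g. /some/dir/file.log

    # Always check the return value of open so failures are visible.
    open my $fh, '>>', $log or die "Can't write $log: $!\n";
    print $fh "job ran at ", scalar localtime, "\n";
    close $fh or die "Can't close $log: $!\n";

Computing the absolute path once at startup also means the job callback can run long after daemonization without caring what the working directory is.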
FWIW, this module has not been updated since 2000, and the author appears to have lost interest given there is still a TODO list. That's no reason not to use it, and the code looks well sorted; just don't expect any new features anytime soon.
cheers
tachyon
Re: file open problem with scheduled jobs
by nmete (Novice) on Jul 22, 2004 at 14:57 UTC
by shemp (Deacon) on Jul 22, 2004 at 16:05 UTC
by nmete (Novice) on Jul 22, 2004 at 19:54 UTC