file open problem with scheduled jobs

by nmete (Novice)
on Jul 21, 2004 at 22:46 UTC ( [id://376400] )

nmete has asked for the wisdom of the Perl Monks concerning the following question:

I use the Perl Schedule::Cron module to schedule some subroutines. These subroutines contain file operations, and the problem is specifically with the file open functions: sysopen and open.

When I set the detach parameter ( $cron->run(detach=>1) ), so that the main scheduler loop is detached from the current process (daemon mode), I cannot create or write any files. When the scheduler process is not detached from the current process, there is no problem.

In both cases I attempt to write to the same directory.
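A rough sketch of this kind of setup (the schedule spec, subroutine name, and file name below are placeholders, not my actual code):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Schedule::Cron;

    # Placeholder job: tries to create a file once a minute.
    sub write_report {
        open my $fh, '>', 'report.txt'            # note: relative path
            or die "cannot open report.txt: $!";
        print {$fh} scalar(localtime), "\n";
        close $fh;
    }

    my $cron = Schedule::Cron->new( \&write_report );
    $cron->add_entry( '* * * * *', \&write_report );

    # Works when run in the foreground; the file writes fail once detached:
    $cron->run( detach => 1 );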

I know the PID of the forked scheduler process and observe that the process runs as user "nobody" (as usual for the HTTP user).

Is this a matter of some security issue, or what other reason might there be for the file open failure?

The CGI program runs under Apache on a Solaris system, and I use Perl 5.6.1.

Thanks.

Replies are listed 'Best First'.
Re: file open problem with scheduled jobs
by tachyon (Chancellor) on Jul 21, 2004 at 23:08 UTC

    I suspect the problem is that you are specifying a relative path, i.e. file.log, and expecting that after the detach you will still be in the same dir. You are not. Like all good daemons, Schedule::Cron does a chdir '/' (so you can unmount parts of the FS), i.e.:

    } else {
        # Child:
        # Try to detach from terminal:
        chdir '/';                 # <---
        open STDIN,  '/dev/null'   or die "Can't read /dev/null: $!";
        open STDOUT, '>/dev/null'  or die "Can't write to /dev/null: $!";

    As a result, if you are in /some/dir and try to write to file.log, then when you don't detach you are trying to write /some/dir/file.log, whereas after detaching you would try to write /file.log, i.e. in root, where user nobody won't have perms. I further suspect that you are not checking the return value of your open, because with STDERR still open you should be seeing the permission denied message.

    open F, ">$fully_qualified_path" or die "Can't write $fully_qualified_ +path $!\n";

    FWIW, this module has not been updated since 2000, so the author appears to have lost interest, given that there is still a TODO list. That is no reason not to use it, and the code looks well sorted; just don't expect any new features anytime soon.

    cheers

    tachyon

      Firstly, thanks for your help... yes, the problem was with the file paths, and it works with an absolute path.

      But I am still confused by the following:

      my $filename1 = "/var/apache/cgi-bin/user/test.txt";
      my $filename2 = "http://my_site/cgi-bin/user/test.txt";

      # $filename is set to one of the two values above
      open(HANDLE, ">$filename") or die "couldn't open $filename for writing: $!";

      The open function works well with $filename1 but not with $filename2.
      What is the difference?
        $filename1 is a unix filesystem path, which is the way the file *really* is addressed.

        $filename2 is a web address. The web server translates the web address into a unix filesystem filename, based on how you have the web server configured. http://my_site/ is the web address of the web server; the next part, /cgi-bin/, is probably your cgi root, which is set up in your web server configuration and translates to some absolute base path, in your case apparently /var/apache/cgi-bin/. The rest, user/test.txt, refers to a specific file on that server.

        So the web server's filename translation turns your address into /var/apache/cgi-bin/user/test.txt, which is exactly what $filename1 is.

        The point is that the web server does that translation automatically, but perl assumes you are giving a relative or absolute pathname of the form your operating system normally deals with, i.e. it does not do the translation that the web server does. If you're much into unix, think about why a command like rm $filename2 won't work right. perl assumes filenames just like the unix shell does.

        The reasons for the web address translation are many: it allows virtual hosting, and it also keeps me from doing something like http://your_server/etc/passwd - which you probably don't want me doing.

        You should probably read up on apache & web server configuration in general.
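
        As a side note, instead of hard-coding /var/apache/cgi-bin/user/ you can let the script find its own directory and build the filesystem path from that - a minimal sketch using the standard FindBin module:

            use strict;
            use warnings;
            use FindBin qw($Bin);   # directory the running script lives in
            use File::Spec;

            # e.g. /var/apache/cgi-bin/user/test.txt when the script
            # itself sits in /var/apache/cgi-bin/user/
            my $filename = File::Spec->catfile( $Bin, 'test.txt' );

            open my $fh, '>', $filename
                or die "couldn't open $filename for writing: $!";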

Re: file open problem with scheduled jobs
by shemp (Deacon) on Jul 21, 2004 at 23:06 UTC
    It could be that the process's working directory changes when it gets detached. Are you using relative or absolute paths for the files you are dealing with?

    You can probably easily see the reasons for failure by using the standard:

    my $filename = 'somefile';
    open HANDLE, '>', $filename or die "couldn't open $filename for writing: $!";

    Of course you should change the '>' appropriately if you are doing other file operations. Any STDERR from cron jobs should automatically be sent to the cron owner's email, and that should tell most of the story.
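
    For example, a minimal sketch of such a check (log and file names are just examples): have the job record its current directory and any open failure in an absolute-path log file, so the detached process leaves a visible trace even without a terminal:

        use strict;
        use warnings;
        use Cwd qw(getcwd);

        sub job {
            # Absolute path, so this works regardless of the current directory.
            open my $log, '>>', '/var/apache/cgi-bin/user/job.log' or return;
            print {$log} "cwd is ", getcwd(), "\n";

            if ( open my $fh, '>', 'test.txt' ) {   # relative path, for demonstration
                print {$fh} "ok\n";
                close $fh;
            }
            else {
                print {$log} "open failed: $!\n";   # e.g. "Permission denied" under /
            }
            close $log;
        }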
