kulls has asked for the wisdom of the Perl Monks concerning the following question:

Greetings,
my $fullpath = "/tmp/dir*";
eval {
    @size = split("\n", `du -sk $fullpath | cut -f1`);
};
$sum = 0 if ($@);
Here, eval is not catching the error when the shell command fails for $fullpath. Either I have to check the $fullpath pattern before running the shell command, or capture the error after executing it. How can I do this?
- kulls

Replies are listed 'Best First'.
Re: Capturing shell command error
by BUU (Prior) on Jan 07, 2006 at 09:55 UTC
    A) eval isn't supposed to capture that sort of error.
    B) If you read the friendly man pages, you'll see that backticks set the $? status variable with the command's exit status, so you should be checking that (see the sketch below).
    C) Why are you bothering to fork a new process and execute du? What's wrong with the built-in Perl operators for determining file size, like -s?
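    A minimal sketch of point B, keeping the original pipeline and variable names from the post; note that with a pipeline, $? reflects the exit status of the last command in the pipe (cut here), not du itself:
        my $fullpath = "/tmp/dir*";
        my @size = split "\n", `du -sk $fullpath | cut -f1`;
        if ($? != 0) {    # non-zero exit status means the command failed
            warn "du pipeline failed, exit status ", $? >> 8, "\n";
        }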
      I understand what you're saying.
      my $str = -s "upload.log"; print $str;
      How can I calculate the sizes of multiple files without looping? For example, I have upload1.log, upload2.log, and upload3.log.
      How can I get their sizes without a loop? How can I use an upload* pattern with -s? I tried, but it doesn't return anything.
      Can you please help?
      - kulls
        @size = map { -s $_ } glob("upload*.log");
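        If you also want the total, one small sketch (List::Util ships with Perl) is to sum that list:
            use List::Util qw(sum);
            my @size = map { -s $_ } glob("upload*.log");
            my $sum  = sum(@size) || 0;    # 0 if the pattern matches nothing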
Re: Capturing shell command error
by Tanktalus (Canon) on Jan 07, 2006 at 16:31 UTC

    As I've said before, using Perl as a shell-script language is not really the best use of Perl, nor of your CPU. Especially when the pure-Perl solutions can be just as fast (or faster) and easier to get right.

    Check out cog's module, Filesys::DiskUsage. It can do this much more simply, and much more perlishly.

    use Filesys::DiskUsage qw/du/;

    my $fullpath = '/tmp/dir*';
    my $sum = du({ 'sector-size' => 4096, 'symlink-size' => 4096 }, glob $fullpath);

    I'm using a size of 4096 above as that seems to be a normal size nowadays for some reason. If you want the individual entries' sizes, you can also pass in 'make-hash' => 1 as an option in the hashref of options. Then the answers should come back in a hash where the key is the file/directory, and the value is the size.
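    For example, a rough sketch of the make-hash variant described above:
        use Filesys::DiskUsage qw/du/;
        # keys are the files/directories, values are their sizes
        my %size = du({ 'make-hash' => 1 }, glob '/tmp/dir*');
        print "$_ => $size{$_}\n" for sort keys %size;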

Re: Capturing shell command error
by graff (Chancellor) on Jan 07, 2006 at 16:32 UTC
    If your original "du" command is covering a lot of sibling directories that match "/tmp/dir*", and if each of those has any amount of subdirectory structure, then I would probably stick with your backtick approach, but it can be very simple:
    my @size = `du -sk /tmp/dir* | cut -f1`;
    Note that in a list context (e.g. assigning to an array), the backticks will split on line breaks for you, and retain the line-feed at the end of each line (each element of the array).

    If the "du" fails there, @size will be empty, but the error report from du will go to your STDERR. If you want to catch the error report, and you are using a bourne-style shell (as opposed to a csh-style shell), you can redirect the shell's stderr, as explained at length in the description of "qx" (backtick opertator) in "perldoc perlop":

    my @size = `du -sk /tmp/dir* 2>&1 | cut -f1`;
    This will put the error message as the sole element of @size. Then again, if you somehow get warnings on some files/paths, like "permission denied", these will be mixed in with size numbers for paths that had no problems -- which makes @size kind of a mess... So you could redirect the shell's stderr to a file (if you want to read it later from the perl script) or just redirect to /dev/null (if you don't need to read it).
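    For example, a rough sketch that throws the warnings away and still notices a total failure:
        my @size = `du -sk /tmp/dir* 2>/dev/null | cut -f1`;
        chomp @size;
        warn "du returned nothing for /tmp/dir*\n" unless @size;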
Re: Capturing shell command error
by planetscape (Chancellor) on Jan 08, 2006 at 01:03 UTC
Re: Capturing shell command error
by Anonymous Monk on Jan 07, 2006 at 16:11 UTC