I have a script which looks for world-writable files and writes any that are found to an output file or, if a file is listed in a file of excluded files, ignores it and moves on to the next. One request for the script is to have it ignore an entire directory.

This is a problem because we have an application installed that (for some unknown reason) sets every...single...file in all of its directories as world writable. The issue with File::Find::prune is that it will only skip the directory a file is found in. The app in question actually has several sub-directories with varying depths.
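For reference, $File::Find::prune does skip an entire subtree, but only when it is set at the moment wanted() is examining the directory entry itself, not a file inside it. A minimal self-contained sketch (using a throwaway temp tree rather than real paths):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Build a throwaway tree: $top/app/bin/deep (to be pruned) and $top/keep
my $top = tempdir(CLEANUP => 1);
make_path("$top/app/bin/deep", "$top/keep");
for my $f ("$top/app/bin/deep/x", "$top/keep/y") {
    open my $fh, '>', $f or die "$f: $!";
    close $fh;
}

my @seen;
find(sub {
    # Setting prune while wanted() is looking at the directory entry
    # itself stops find() from descending into it at all.
    if (-d && $File::Find::name eq "$top/app") {
        $File::Find::prune = 1;
        return;
    }
    push @seen, $File::Find::name;
}, $top);

# Nothing under $top/app is ever visited; $top/keep/y still is.
print scalar(grep { m{^\Q$top\E/app} } @seen), "\n";   # 0
print scalar(grep { m{/keep/y$}      } @seen), "\n";   # 1
```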

For instance:

/opt/app/bin/.../*
/opt/app/config/.../.../*
/opt/app/log/.../.../.../*
/opt/app/tmp/*

What I'd rather do is simply tell the script to skip the entire /opt/app directory rather than evaluate a file, determine whether it is supposed to be ignored, and then set File::Find::prune. Not only would it vastly simplify things, it would eliminate the need to place a file from every single sub-directory into the excludes list just to get each one pruned. Additionally, the base directory (/opt/app) contains no files, only sub-directories, so I couldn't simply place one file in the excludes list and use it as the basis for File::Find::prune.

This is the script as I have it now:

#!/usr/bin/perl
use warnings;
use strict;
use Fcntl ':mode';
use File::Find;
no warnings 'File::Find';
no warnings 'uninitialized';

my $dir      = "/var/log/tivoli/";
my $mtab     = "/etc/mtab";
my $permFile = "world_writable_files.txt";
my $tmpFile  = "world_writable_files.tmp";
my $exclude  = "/usr/local/etc/world_writable_excludes.txt";
my $mask     = S_IWUSR | S_IWGRP | S_IWOTH;
my (%excludes, %devNums);
my $errHeader;

# Compile a list of mountpoints that need to be scanned
my @mounts;
open MT, "<${mtab}" or die "Cannot open ${mtab}, $!";

# We only want the local mountpoints
while (<MT>) {
  if ($_ =~ /ext[34]/) {
    chomp;
    my @line = split;
    push(@mounts, $line[1]);
    my @stats = stat($line[1]);
    $devNums{$stats[0]} = undef;
  }
}
close MT;

# Build a hash from /usr/local/etc/world_writable_excludes.txt
if ((! -e $exclude) || (-z $exclude)) {
  $errHeader = <<HEADER;
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!                                                  !!
!!  /usr/local/etc/world_writable_excludes.txt is   !!
!!  missing or empty. This report includes          !!
!!  every world-writable file including those which !!
!!  are expected and should be excluded.            !!
!!                                                  !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
HEADER
} else {
  open XCLD, "<${exclude}" or die "Cannot open ${exclude}, $!\n";

  while (<XCLD>) {
    chomp;
    $excludes{$_} = 1;
  }
}

sub wanted {
  my @dirStats = stat($File::Find::name);

  # Is it excluded from the report...
  return if exists $excludes{$File::Find::name};

  # ...in a special directory, ...
  return if ($File::Find::name =~ /^\bsys\b|\bproc\b|\bdev\b$/);

  # ...a regular file, ...
  return unless -f;

  # ...local, ...
  return unless (exists $devNums{$dirStats[0]});

  # ...and world writable?
  return unless ($dirStats[2] & $mask) == $mask;

  # If so, add the file to the list of world writable files
  print(WWFILE "$File::Find::name\n");
}

# Create the output file path if it doesn't already exist.
mkdir($dir) or die "Cannot execute mkdir on ${dir}, $!" unless (-d $dir);

# Create our filehandle for writing our findings
open WWFILE, ">${dir}${tmpFile}" or die "Cannot open ${dir}${tmpFile}, $!";
print(WWFILE "${errHeader}") if ($errHeader);

find(\&wanted, @mounts);

close WWFILE;

# If no world-writable files have been found ${tmpFile} should be zero-size;
# delete it so Tivoli won't alert
if (-z "${dir}${tmpFile}") {
  unlink "${dir}${tmpFile}";
} else {
  rename("${dir}${tmpFile}", "${dir}${permFile}")
    or die "Cannot rename file ${dir}${tmpFile}, $!";
}

As a suggestion for optimization, I was told the wanted sub should look like this:

sub wanted {
  my @dirStats = stat($File::Find::name);

  # Is it excluded from the report...
  if (exists $excludes{$File::Find::name}) {
    $File::Find::prune = 1 if (-d _);
    return;
  }

  # ...in a basic directory, ...
  if ($File::Find::name =~ /^\bsys\b|\bproc\b|\bdev\b$/) {
    $File::Find::prune = 1 if (-d _);
    return;
  }

  # ...not a regular file, ...
  return unless -f _;

  # ...local, ...
  return if (exists $devNums{$dirStats[0]});

  # ...and world writable?
  my $protection = $dirStats[2];
  my $writemask  = (S_IWUSR | S_IWGRP | S_IWOTH);
  return unless $writemask == ($protection & $writemask);

  # If so, add the file to the list of world writable files
  print(WWFILE "$File::Find::name\n");
}

After messing with this I found that it doesn't do what I want, and it is where my notion of adding every single sub-directory comes from.

What should I be doing to ensure the /opt/app directory is never traversed, and is skipped entirely as soon as it is encountered? I imagine it would be a matter of running at least one test to determine whether a file's base directory is /opt/app.
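One approach along those lines (a sketch only, with a hypothetical %excludeDirs hash standing in for however you would load directory-level excludes): make a directory check the very first test in wanted(), so an excluded directory is pruned the moment find() reaches it, before any of the per-file logic runs. Demonstrated here against a throwaway temp tree instead of the real /opt/app:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);
use File::Path qw(make_path);

# Hypothetical stand-in for /opt/app, built under a temp tree.
my $root = tempdir(CLEANUP => 1);
make_path("$root/opt/app/bin", "$root/var/log");
for my $f ("$root/opt/app/bin/x", "$root/var/log/y") {
    open my $fh, '>', $f or die "$f: $!";
    close $fh;
}

# Directories to skip wholesale; in the real script this could be
# loaded from the excludes file alongside the per-file entries.
my %excludeDirs = ( "$root/opt/app" => 1 );

my @found;
sub wanted {
    # Make this the very first test, so find() never descends into
    # an excluded directory's subtree at all.
    if (-d && exists $excludeDirs{$File::Find::name}) {
        $File::Find::prune = 1;
        return;
    }
    return unless -f;
    # The real script would run its world-writable checks here.
    push @found, $File::Find::name;
}

find(\&wanted, $root);
print scalar(grep { m{^\Q$root\E/opt/app} } @found), "\n";   # 0
print scalar(@found), "\n";                                  # 1
```

With this in place nothing below /opt/app is ever stat()ed or compared against the excludes hash, which also removes the need to seed the excludes file with one entry per sub-directory.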


In reply to How do I ignore an entire directory using File::Find? by theillien1
