frank1 has asked for the wisdom of the Perl Monks concerning the following question:

The below script works successfully to back up the database.

The only problem I have is removing the files which were modified or uploaded to the server more than 30 minutes ago.

I want to back up the database and then remove all files ending in .sql which were stored or uploaded to the server more than 30 minutes ago.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::stat;
    use POSIX qw(strftime);

    my $dbhost = '';
    my $dbuser = '';
    my $dbname = '';
    my $folder = 'backup/';
    my $filename = join '.', 'db.backup', time, 'sql';

    if ($bp = `mysqldump --host=$dbhost --user=$dbuser --password="dbpassword" $dbname > $folder$filename`) {
        for my $file (@$folder) {
            my $time = (stat($file))[9];
            my $mod_time = strftime('%M', localtime($time));
            if ($mod_time > '30') {
                unlink glob "$folder/*.sql";
            }
        }
    }
    else {
        die "error";
    }

Replies are listed 'Best First'.
Re: unlink files
by Corion (Patriarch) on Aug 07, 2024 at 18:10 UTC

    See also the documentation on the -M operator, which tells you how old (relative to the start of the program) a file is.

    But usually you want the option of running such a cleanup program with a reference time other than the current one, so -M is not always helpful.
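
    Still, for the simple current-time case, a minimal sketch of the -M idea might look like this (the backup/*.sql layout is assumed from the question; -M reports age in days relative to the script start time $^T, so 30 minutes is 30/(60*24) days):

        # Remove *.sql files whose -M age exceeds 30 minutes.
        for my $file (glob 'backup/*.sql') {
            unlink $file or warn "unlink($file): $!"
                if -M $file > 30 / (60 * 24);
        }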

    A very good approach would be to tell us where your current program does not do what you want, and what it does instead.

    This is not a code writing service and we expect you to learn from the documentation and to do the legwork of actually trying things out.

Re: unlink files
by NERDVANA (Priest) on Aug 07, 2024 at 17:34 UTC
        use v5.20;
        use warnings;
        use File::Find;
        use File::stat;

        ...

        find(sub {
            my $stat = stat $_ or die "stat($_): $!";
            say "unlink $_" if time - $stat->mtime > 30*60;
        }, $folder);

    When you are really sure it unlinks the right files, replace say "unlink $_" with unlink $_.
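
    For completeness, a sketch of what that final version might look like with the unlink error-checked (the $folder value is an assumption taken from the question's script; the snippet above elides it with "..."):

        use v5.20;
        use warnings;
        use File::Find;
        use File::stat;

        my $folder = 'backup/';   # assumed; elided ("...") in the snippet above

        find(sub {
            return unless -f $_;  # skip subdirectories
            my $stat = stat $_ or die "stat($_): $!";
            # Remove files last modified more than 30 minutes ago.
            if (time - $stat->mtime > 30*60) {
                unlink $_ or warn "unlink($_): $!";
            }
        }, $folder);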

Re: unlink files
by cavac (Prior) on Aug 08, 2024 at 13:27 UTC

    There is more than one way to do this, and the "correct" way depends on the minute details of your setup. Here's the code I use in my framework (this isn't exactly from the most modern module I have, but it works):

        # Use the object oriented version of stat
        use File::stat;

        ...

        sub _dircleaner($self, $dir, $maxagedays) {
            my $reph = $self->{server}->{modules}->{$self->{reporting}};

            $reph->debuglog("Scanning $dir for cleaning");

            my @todelete;
            my $deletes = 0;
            my $ok = 1;

            my $dfh;
            if(!opendir($dfh, $dir)) {
                $reph->dblog("DIR_CLEANER", "Can't open '$dir'");
                $dbh->commit;
                $reph->debuglog("Can't open $dir");
                $ok = 0;
                goto finishcleaning;
            }

            my $fcount = 0;
            my $maxage = $maxagedays * 3600 * 24; # Convert days to seconds
            my $now = time();
            while((my $fname = readdir($dfh))) {
                next if($fname eq "." || $fname eq "..");

                # FIXME FOR SUBDIRS! CLEAN UP SUBDIRS, THEN REMOVE ALL EMPTY DIRS
                next if(!-f "$dir/$fname");

                my $fileage = stat("$dir/$fname")->mtime;
                my $age = $now - $fileage;
                next if($age <= $maxage);

                push @todelete, "$dir/$fname";
                $fcount++;
            }
            closedir($dfh);

            if($fcount) {
                $reph->debuglog("Cleaning $fcount file(s) in $dir");
                foreach my $fname (@todelete) {
                    if(unlink $fname) {
                        $deletes++;
                        $reph->debuglog(" ...deleted $fname");
                    } else {
                        $ok = 0;
                        $reph->debuglog("Failed to delete $fname");
                    }
                }
                $reph->debuglog("Deleted $deletes file(s).");
            }

            finishcleaning:
            return $ok;
        }

    Basically, I read the whole directory in and remember the files I want to delete, then delete them one by one. The relevant steps for filtering the files are:

        # Convert age (in days) to seconds
        my $maxage = $maxagedays * 3600 * 24;

        # Get the "last modified" timestamp of the file in SECONDS (unix timestamp)
        my $fileage = stat("$dir/$fname")->mtime;

        # Current time in seconds (unix timestamp)
        my $now = time;

        # Age in seconds
        my $age = $now - $fileage;

        # Ignore file if it is too new
        next if($age <= $maxage);

        # Add files to the "@todelete" list with full filename
        push @todelete, "$dir/$fname";
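
    For the question's 30-minute case, the same collect-then-delete pattern could be distilled into a standalone sketch like this (the backup directory and the .sql filter come from the question; the rest is assumed):

        use strict;
        use warnings;
        use File::stat;

        my $dir    = 'backup';
        my $maxage = 30 * 60;   # 30 minutes in seconds
        my $now    = time;

        # Collect first, then delete, as in the framework code above.
        opendir(my $dfh, $dir) or die "Can't open '$dir': $!";
        my @todelete = grep { -f $_ && $now - stat($_)->mtime > $maxage }
                       map  { "$dir/$_" }
                       grep { /\.sql\z/ } readdir $dfh;
        closedir $dfh;

        for my $fname (@todelete) {
            unlink $fname or warn "Failed to delete $fname: $!";
        }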

Re: unlink files
by karlgoethebier (Abbot) on Aug 07, 2024 at 13:53 UTC

      Any demo, so I can try?

        karl@rantanplan:~/src/myC$ touch nose
        karl@rantanplan:~/src/myC$ find . -type f -mmin 1
        ./nose
        karl@rantanplan:~/src/myC$ find . -type f -mmin 3 -exec rm -i {} \;
        rm: remove regular empty file './nose'?

        Update: Add -size 0 for testing.

Re: unlink files
by Anonymous Monk on Aug 07, 2024 at 17:32 UTC

    Did you actually try to run the script you exhibited? When I try I get Global symbol "$bp" requires explicit package name (did you forget to declare "my $bp"?) at clean-sql.pl line 13.. Fixing that gives me Can't use string ("backup/") as an ARRAY ref while "strict refs" in use at clean-sql.pl line 15.. The latter is caused by dereferencing a scalar that does not contain an array reference. I have no idea what the intent of that line is.

    A pure-Perl implementation of the loop would go something like this:

        my $thirty_minutes_ago = time - 30 * 60;
        for my $file ( glob( "$folder*.sql" ) ) {
            if ( stat( $file )->mtime() < $thirty_minutes_ago ) {
                unlink $file or die "Failed to unlink $file: $!";
            }
        }
     

    The above takes advantage of the fact that you used File::stat.

      Thanks for this example, thanks so much!

Re: unlink files
by perlfan (Parson) on Aug 24, 2024 at 17:36 UTC
    This is a good example of when to use Bash: the granularity Perl gives you is unnecessary here and tempts you to overcomplicate the task, so there's no need to introduce it when the shell gives you much more direct access to what you're doing. You can do this with a short script that simply uses mysqldump and find.

    Or if you want to get really clean, pipe the dump directly into gzip -c (e.g. mysqldump dbname | gzip -c > dump.sql.gz) to get your .sql.gz with no temporary files to clean up. gzip can also delete the files it's zipping, leaving you with just the .gz. Or if you have more than one, tar -cz. At least create yourself a .my.cnf file chmod'd 600 so you don't have to mess with the password or put it directly in the command (it shows up in full in the process list).

    If you insist on using Perl for this, then at least make it interesting and use something like Archive::Tar::Builder to manage your dumps.

    It's also perfectly okay to treat Perl scripts as utilities that are piped together using Bash; some people tend to think you have to do it all in Perl as one monolithic thing. Perl works just as well for creating small, focused tools that are piped together (via STDIN and STDOUT) and managed with a Bash script.
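
    In that spirit, here is a toy sketch of such a focused tool (entirely illustrative, including the file names): it reads file names on STDIN and prints only those modified more than 30 minutes ago, leaving the deletion to the surrounding pipeline, e.g. find backup -name '*.sql' | perl older-than.pl | xargs rm.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::stat;

        # Echo file names whose mtime is more than 30 minutes in the past;
        # the caller decides what to do with them.
        while (my $file = <STDIN>) {
            chomp $file;
            next unless -f $file;
            print "$file\n" if time - stat($file)->mtime > 30 * 60;
        }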