farhan has asked for the wisdom of the Perl Monks concerning the following question:

Hi Perl Gurus,


Is there any way in PERL to find out if a file is being used by any process, or if a file is busy? Could flock help me? If yes, how?
OS = Solaris 10

Farhan

Replies are listed 'Best First'.
Re: file used by process
by CountZero (Bishop) on Sep 09, 2010 at 05:38 UTC
    From the docs of flock:
    it waits indefinitely until the lock is granted
    I doubt this is what you want!

    However, check out the Fcntl module: it provides the LOCK_NB flag, which gives flock a non-blocking mode.
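    For instance, a minimal sketch of a non-blocking lock attempt (the path is illustrative, and since the lock is advisory it only tells you about other processes that also use flock):

    use strict;
    use warnings;
    use Fcntl qw(:flock);    # imports LOCK_SH, LOCK_EX, LOCK_NB, LOCK_UN

    # LOCK_NB makes flock return immediately instead of waiting.
    open my $fh, '<', '/tmp/some.file' or die "Can't open: $!";
    if (flock $fh, LOCK_SH | LOCK_NB) {
        print "Got a shared lock - no one holds an exclusive lock\n";
        flock $fh, LOCK_UN;
    }
    else {
        print "Someone else holds an exclusive lock\n";
    }
    close $fh;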

    A good introduction to file locking is Mark Jason Dominus' File Locking Tricks and Traps.

    BTW: it is "Perl" for the language or "perl" for the interpreter, but never, ever "PERL"!

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

Re: file used by process
by BrowserUk (Patriarch) on Sep 09, 2010 at 06:01 UTC

    Try the lsof or fuser commands.
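    For example, a rough sketch of shelling out to fuser (the path is hypothetical; on Solaris, fuser prints matching PIDs to stdout and everything else to stderr, so any digits in the captured output mean the file is in use):

    my $file = '/var/log/apache2/access.log';    # hypothetical target
    chomp(my $pids = qx{/usr/sbin/fuser $file 2>/dev/null});
    if ($pids =~ /\d/) {
        print "$file is in use by PID(s):$pids\n";
    }
    else {
        print "$file appears to be unused\n";
    }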

Re: file used by process
by cdarke (Prior) on Sep 09, 2010 at 09:35 UTC
    On most UNIXes, fcntl and flock file locks are advisory, not mandatory, so taking a flock on a file is useless unless the other process is also using flock. If nobody else uses locking, then your using it will have no effect on them.

    BrowserUk's suggestions are the only ways I can think of, but they won't be quick. The downside is that checking which files are in use is not atomic: another process might grab or release the file between your inspecting it and your acting on the result.
Re: file used by process
by GrandFather (Saint) on Sep 09, 2010 at 05:34 UTC

    You could try to open the file for appending. If the open fails, then either the file can't be created/opened due to permission problems (or a bad file name or whatever), or it is locked (maybe, depending on the OS). Once it is open, you can use flock to attempt to grab an exclusive lock (LOCK_EX); again, if that fails the file is probably already locked.
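    A sketch of that two-step approach (the path is illustrative; remember the lock is advisory, so it only detects other processes that also use flock):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    my $file = '/tmp/some.file';    # hypothetical
    open my $fh, '>>', $file
        or die "Can't open $file (permissions? bad name? locked?): $!";
    if (flock $fh, LOCK_EX | LOCK_NB) {
        print "Got an exclusive lock - $file looks free\n";
        flock $fh, LOCK_UN;
    }
    else {
        print "$file is probably locked by another process\n";
    }
    close $fh;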

    True laziness is hard work
      Thanks, GrandFather, for replying.

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Try to append a test line to the live Apache access log.
      open(my $fh, '>>', '/var/log/apache2/access.log')
          or die "Can't open access.log: $!";
      print $fh "This is a test\n";
      close($fh);
      It appends to access.log even though that file is being held open by the apache2 processes:

      root@cu:~# lsof|grep access.log
      apache2 1115 www-data 8w REG 9,4 30362650 2719854 /var/log/apache2/access.log
      apache2 1115 www-data 9w REG 9,4 30362650 2719854 /var/log/apache2/access.log
      apache2 2773 www-data 8w REG 9,4 30362650 2719854 /var/log/apache2/access.log
      apache2 2773 www-data 9w REG 9,4 30362650 2719854 /var/log/apache2/access.log

      If this file were really locked by Apache, I shouldn't have been able to append to it, yet the append succeeded.

        Which is why I said "or it is locked (maybe depending on the OS)" - it's OS-dependent behaviour. It's also why I went on to suggest you could use flock, although others have pointed out that it may not do quite what you want either.

        True laziness is hard work
        If you think it through, it is entirely correct that Apache allows you to open this file.

        Think what would happen if Apache kept an exclusive lock on this file: nobody would ever be able even to read it unless they shut the web server down first!

        Of course, writing to the Apache log file is not advised, but it can hardly be considered Apache's fault that you are trying to destroy its log.

        You might answer that reading a file is different from writing, that the locking mechanism should account for that, and that a shared lock does exactly this (the process holding the lock can read and write while all others can only read). Alas, locks are advisory by nature, so anyone may ignore them at their own peril.
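        A quick demonstration sketch (hypothetical temp file): an advisory exclusive lock does nothing to stop a writer that never asks for a lock.

        use strict;
        use warnings;
        use Fcntl qw(:flock);

        my $file = '/tmp/advisory_demo.txt';
        open my $locker, '>', $file  or die "open: $!";
        flock $locker, LOCK_EX       or die "flock: $!";

        # A second handle that never calls flock writes straight through.
        open my $writer, '>>', $file or die "open: $!";
        print $writer "written despite the exclusive lock\n";
        close $writer;
        close $locker;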

        CountZero

        "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

Re: file used by process
by jffry (Hermit) on Sep 09, 2010 at 14:59 UTC

    I have a somewhat similar situation, and here is how I solved it. My problem was on Linux, but hopefully there is something to be gained from this solution.

    I have only included a snippet from my entire script (which does a few other things). I have not run this snippet by itself, but I did run it through perl -c to make sure the syntax was at least OK. I also had to sanitize a few values before posting it publicly, so I hope I didn't introduce an error doing that either.

    #!/usr/bin/perl -w

    use strict;
    use warnings;
    use Data::Dumper;

    # Mnemonic: "lh" = log file handle.
    open my $lh, '>', '/tmp/this_script.log'
        or die "Opening debug log failed: $!";

    ############################################################################
    #
    # Kill any process still holding a file open in /usr/local/tomcat.
    #
    ############################################################################

    my @pids = lsof_grep('/usr/local/tomcat');

    print $lh scalar localtime;
    print $lh "\n+++++ Contents of \@pids (lsof_grep) +++++\n";
    print $lh Dumper(\@pids);

    for my $pid (@pids) {
        # This is the first time a few of these processes are being told to
        # shut down.  Thus, we need to send a signal that lets them close
        # their open files, which is the entire point of this whole script.
        kill 'TERM', $pid;
    }

    # Give those processes up to 1 second per process to quit.
    sleep @pids;

    # Run lsof again.  We'll send a stronger signal this time to any
    # processes that still have Tomcat files open.
    @pids = lsof_grep('/usr/local/tomcat');

    print $lh scalar localtime;
    print $lh "\n+++++ Contents of \@pids (lsof_grep, 2nd time) +++++\n";
    print $lh Dumper(\@pids);

    for my $pid (@pids) {
        kill 'KILL', $pid;
    }

    close $lh;

    exit 0;

    sub lsof_grep {
        my $re = shift;
        my @lsof_out = qx{/usr/sbin/lsof};
        my %openfiles;

        for my $line (@lsof_out) {
            if ($line =~ m{$re}) {
                my @fields = split /\s+/, $line;
                $openfiles{$fields[-1]} = $fields[1];
            }
        }

        # %openfiles will have the same PID multiple times, once for each
        # open file it has.  However, we don't want the PID repeated in the
        # return list.  We still need to print the contents of %openfiles
        # for potential debug purposes, however.
        print $lh scalar localtime;
        print $lh "\n+++++ Contents of \%openfiles +++++\n";
        print $lh Dumper(\%openfiles);

        # %seen is only used in the next grep to make the expression work,
        # because hash keys are unique.
        my %seen;
        return grep { !$seen{$_}++ } values %openfiles;
    }