RazorbladeBidet has asked for the wisdom of the Perl Monks concerning the following question:
Problem:
A program I am writing should only have one instance running at a time in any particular directory (its execution in other directories is unsupported/ignored).
Currently I'm creating a lock file and checking it to see whether the program can run. Any subsequent executions of the program should fail.
Now there's a requirement to queue the blocked processes and run them on a FIFO basis. However, a simple sleep-while-locked loop won't work, because the waiters can wake up out of order.
Is there a simple way to ensure execution stays in order? This is a CLI/cron-based job running on UNIX. (Perl 5.8.0)
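For context, here is a minimal sketch of the "fail if already running" check the question describes, using flock with a non-blocking attempt (the lock-file name is illustrative, not from the post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';

# Take a per-directory lock; the file name is a made-up example.
open my $lock, '>>', './.runlock' or die "Can't open lock file: $!";

# LOCK_NB makes flock fail immediately instead of queueing up.
unless (flock $lock, LOCK_EX | LOCK_NB) {
    die "Another instance is already running in this directory\n";
}
# ... do the real work; the lock is released when the process exits
```

The question is precisely about replacing that `die` with an orderly wait.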
--------------
"But what of all those sweet words you spoke in private?"
"Oh that's just what we call pillow talk, baby, that's all."
Re: Locking/unlocking program execution using a queue
by polettix (Vicar) on Mar 30, 2005 at 14:52 UTC
Use a FIFO :)
(This solution may not scale well if there are many directories, but you'll be able to adapt to your case)
Create a FIFO (mkfifo fifoname, or mknod fifoname p on older systems) and have a script continuously read from it. While there's nothing in the FIFO, the reader blocks and consumes no resources.
Programs to run are simply inserted into the FIFO as strings: in the cron job, you substitute the script execution with a write into the FIFO, i.e. you prepend echo and append a redirection to the FIFO. The reader script will extract the commands and execute them, via some eval mechanism, or fork(), or whatever you like.
There are tons of theme variations, but I think this should give you the idea.
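A minimal sketch of the reader side, assuming the FIFO already exists (the sub and FIFO names are illustrative). The open blocks until a writer appears, so an idle queue costs no CPU:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Read one batch of commands from the FIFO and run each; returns once
# every current writer has closed its end.
sub drain_fifo {
    my ($fifo) = @_;
    # Blocks here until some writer opens the FIFO.
    open my $fh, '<', $fifo or die "Can't open FIFO $fifo: $!";
    my @ran;
    while (my $cmd = <$fh>) {
        chomp $cmd;
        system($cmd) == 0 or warn "command failed: $cmd\n";
        push @ran, $cmd;
    }
    close $fh;
    return @ran;
}

# A daemon would loop:   drain_fifo('/path/to/fifoname') while 1;
# and a cron job enqueues with:   echo '/path/to/job args' > /path/to/fifoname
```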
Flavio
Don't fool yourself.
You can guard against the accidental kill by putting some monitor script in the crontab itself - but then you'd have an explosion of scripts :)
My previous solution was clean in the sense that it avoids active waits. If you don't mind slightly active waits, you could keep a queue file in each directory, with write access protected by a lock: append your 'name' (e.g. your PID), then keep checking until it's your turn. When you're done, you simply remove yourself from the top of the file (using the same locking to prevent concurrent writes). Access to the queue file should be quick, so a check every second should suffice without slowing your machine to a crawl.
Note that if you use PIDs, you can also check if the current first-in-the-list is alive, just to prevent infinite locking by a badly dead script.
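The liveness probe can be sketched like this (the sub name is made up): signal 0 delivers nothing but still performs the existence check. Note kill can fail with EPERM for another user's live process, so that case counts as "alive":

```perl
use strict;
use warnings;
use Errno qw(EPERM);

# True if $pid refers to a currently running process.
sub pid_alive {
    my ($pid) = @_;
    return 1 if kill 0, $pid;   # process exists and we may signal it
    return 1 if $! == EPERM;    # exists, but owned by someone else
    return 0;                   # no such process: queue entry is stale
}
```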
Flavio
Don't fool yourself.
Re: Locking/unlocking program execution using a queue
by demerphq (Chancellor) on Mar 30, 2005 at 15:35 UTC
Obviously you can use a database for this. It might be overkill, but transactions and the right isolation level make for a very nice queue.
I guess an alternative is to have a directory that is used for locking. The directory would contain a mutex lock file, used to show that a job is already running, as well as a file per job queued up (you could use numerically increasing filenames to ensure the jobs are processed in order). The process would look like this:
- Process starts and tries to get a lock on the mutex file.
- If the lock is obtained, the directory is scanned for job files. If job files exist, the path the process was originally tasked to perform is written to the directory as a job file and the first job in the queue is performed instead.
- If the lock fails, there is already a task running, so simply write a job file and exit.
- The directory is processed.
- The job file is deleted.
- Prior to termination and the relinquishing of the lock, the process rescans for pending job files. If any are present it processes them in order until there are no jobs pending.
Anyway, that's probably what I would do if I wasn't going to use a DB.
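A rough sketch of this scheme (all names here are made up for illustration): a lock directory holds a mutex file plus one numbered job file per queued task, and whoever holds the mutex drains the queue in filename order. This simple version assumes enqueuers are serialized (e.g. by cron firing them one at a time); a real one would lock around the numbering too:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';

# Drop a job file with the next number in sequence.
sub enqueue_job {
    my ($dir, $task) = @_;
    mkdir $dir unless -d $dir;
    my ($max) = sort { $b <=> $a } map { /(\d+)\.job$/ ? $1 : 0 } glob "$dir/*.job";
    my $name = sprintf '%s/%06d.job', $dir, ($max // 0) + 1;
    open my $fh, '>', $name or die "Can't write $name: $!";
    print $fh "$task\n";
    close $fh;
}

# Try to become the drainer; $worker is a coderef called with each task.
sub run_pending {
    my ($dir, $worker) = @_;
    open my $mutex, '>', "$dir/mutex" or die "Can't open mutex: $!";
    return 0 unless flock $mutex, LOCK_EX | LOCK_NB;   # someone else is draining
    # Rescan after each pass so late arrivals are picked up before we exit.
    while (my @jobs = sort glob "$dir/*.job") {
        for my $job (@jobs) {
            open my $jf, '<', $job or next;
            chomp(my $task = <$jf>);
            close $jf;
            $worker->($task);
            unlink $job;
        }
    }
    close $mutex;   # lock released on close
    return 1;
}
```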
I've done exactly that, but if you use the files' last-modified times, you don't need to worry about the job filenames as long as they are unique.
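Ordering by mtime can be a one-liner (the directory layout here is illustrative). One caveat: mtime granularity is a full second on many filesystems, so jobs enqueued nearly simultaneously can tie:

```perl
use strict;
use warnings;

# Return job files sorted oldest-first by modification time.
sub jobs_by_mtime {
    my ($dir) = @_;
    return sort { (stat $a)[9] <=> (stat $b)[9] } glob "$dir/*.job";
}
```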
Re: Locking/unlocking program execution using a queue
by gam3 (Curate) on Mar 30, 2005 at 23:24 UTC
The code below will do what you want.
Several things could be done to improve the code: 1) the PID could be written alongside the queue number ($lock) so that the kill 0 check recommended by RazorbladeBidet could be used; 2) better exit handling is needed so that improvement 1 isn't necessary; 3) the queue (countfile) could have a more robust structure.
use strict;
use warnings;
use Fcntl ':flock';
use Time::HiRes qw( usleep );

# Return true if our ticket is at the head of the queue file.
sub check_queue {
    my $lock = shift;
    open(FILE, "<countfile") || die "Can't read countfile: $!";
    flock(FILE, LOCK_SH);
    seek(FILE, 0, 0);
    my $top = readline FILE;
    flock(FILE, LOCK_UN);
    close FILE;
    return $lock == $top;
}

# Append a new ticket to the queue file and return its number.
sub add_to_queue {
    my $lock = 0;
    if (open FILE, "+<countfile") {
        # Take the exclusive lock before reading: upgrading from LOCK_SH
        # after the read would let two processes see the same last ticket
        # and hand out duplicates.
        flock(FILE, LOCK_EX);
        seek(FILE, 0, 0);
        for (readline FILE) {
            $lock = $_;
        }
        $lock++;
    } else {
        open(FILE, ">>countfile") || die "Can't create countfile: $!";
        flock(FILE, LOCK_EX);
    }
    seek(FILE, 0, 2);
    printf FILE "%d\n", $lock;
    flock(FILE, LOCK_UN);
    close FILE;
    return $lock;
}

# Rewrite the queue file with our ticket filtered out.
sub remove_from_queue {
    my $lock = shift;
    my @procs = ();
    open(FILE, "+<countfile") || die "Can't open countfile: $!";
    flock(FILE, LOCK_EX);
    seek(FILE, 0, 0);
    for (readline FILE) {
        next if $_ == $lock;
        push @procs, $_;
    }
    seek(FILE, 0, 0);
    truncate(FILE, 0) || die "Can't truncate countfile: $!";
    print(FILE @procs);
    flock(FILE, LOCK_UN);
    close FILE;
}

my $done = 0;
my $lock = add_to_queue();
eval {
    # Append mode creates the lockfile if needed; note the original's
    # open LOCK, "./lockfile" || die could never die (|| bound to the name).
    open(LOCK, ">>./lockfile") or die "Can't open lockfile: $!";
    while (!$done) {
        flock(LOCK, LOCK_EX);
        last if check_queue($lock);
        flock(LOCK, LOCK_UN);
        usleep 50000;
    }
    print STDERR "Doing work on $lock ", scalar localtime, "\n";
    # Do your work here
    sleep 3;
    print STDERR "Done working on $lock ", scalar localtime, "\n";
    remove_from_queue($lock);
    flock(LOCK, LOCK_UN);
};
if ($@) {
    remove_from_queue($lock);
}
-- gam3
A picture is worth a thousand words, but takes 200K.
Re: Locking/unlocking program execution using a queue
by reasonablekeith (Deacon) on Mar 31, 2005 at 08:59 UTC
If you're prepared to wait indefinitely for the lock, then your scripts should run in the correct order just with your simple model.
#!/usr/bin/perl -w
use strict;
use Fcntl ':flock'; # import LOCK_* constants

my $lockfile_path = "./lockfile";
open(LOCKFILE, ">$lockfile_path") or die "Can't open $lockfile_path: $!";
flock(LOCKFILE, LOCK_EX);
print "got lock @ARGV\n";
sleep(10);
print "finished sleeping @ARGV\n";
flock(LOCKFILE, LOCK_UN);
If you run multiple instances of this script like so...
./test.pl 1 &
./test.pl 2 &
./test.pl 3 &
./test.pl 4 &
...you'll see them running in the correct order. As you point out, this does break if the flock fails and you need to get back in the queue though.
I didn't see that documented anywhere (and I have no man pages installed for flock, fcntl or lockf on the AIX machine I am working on!) so I tested it out.
By hand it seemed to work... but running it in a script of
#!/bin/ksh
testlock.pl 1 &
testlock.pl 2 &
testlock.pl 3 &
testlock.pl 4 &
testlock.pl 5 &
gave me this:
got lock 4
finished sleeping 4
got lock 3
finished sleeping 3
got lock 1
finished sleeping 1
got lock 5
finished sleeping 5
got lock 2
finished sleeping 2
So I don't think in-order wakeup of flock waiters is guaranteed on all platforms.
--------------
"But what of all those sweet words you spoke in private?"
"Oh that's just what we call pillow talk, baby, that's all."
I had only tried it by hand, but it worked fine when I ran it as per your ksh script just now. I'm on Solaris 5.8. Guess it's not something you can rely on though.
Re: Locking/unlocking program execution using a queue
by thekestrel (Friar) on Mar 30, 2005 at 23:55 UTC
Hi,
This might not be what you want, since you already have lock-file support in each directory, perhaps for other tasks, but...
Rather than having a lock file in each directory, just maintain a single file listing the paths of the directories that are allowed to be visited/have been processed. Since you would traverse the list sequentially, you would not have to worry about doing things out of order.
Every time you complete a task for a particular directory, you could save the line number (which directory you did last) to another file, so you could restart if the script bailed out for some reason.
An advantage of this is that you don't have to skip in and out of all the directories to see if there is work to do - the list records where the work is to be done.
Just a thought... =)
Regards Paul.
Re: Locking/unlocking program execution using a queue
by Anonymous Monk on Apr 01, 2005 at 18:16 UTC
|
The real solution would be:
- mknod $$.lock
- open $$.lock for read/write
- hard-link $$.lock to reallock

Now there are two possible outcomes:
- The link succeeds:
  - (re)open reallock for writing (if that errors, begin anew)
  - close $$.lock
  - unlink $$.lock
  - you can work now
- The link fails:
  - open reallock for read/write
  - reopen reallock for reading
  - close the read/write handle
  - read from the read handle
  - on EOF, unlink reallock
  - close the read handle
  - begin anew
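The core of this trick is that link(2) is atomic, even over NFS where flock historically wasn't reliable: whoever succeeds in linking their per-PID file to the shared name holds the lock. A minimal Perl sketch of just that step (sub and file names are illustrative, and this omits the post's stale-lock recovery branch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Try to take the lock; returns 1 on success, 0 if someone else holds it.
sub try_lock {
    my ($lockname) = @_;
    my $tmp = "$lockname.$$";                 # the per-PID "$$.lock" file
    open my $fh, '>', $tmp or die "Can't create $tmp: $!";
    close $fh;
    my $got = link $tmp, $lockname;           # atomic: fails if $lockname exists
    unlink $tmp;                              # scratch file no longer needed
    return $got ? 1 : 0;
}

sub unlock {
    my ($lockname) = @_;
    unlink $lockname or die "Can't unlink $lockname: $!";
}
```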