in reply to How do I prevent more than one request (to some URL) per second with CGI?

Not sure I understand what you're trying to accomplish here... Are you worried that the script can be started by multiple users simultaneously, or do you simply need a delay between successive requests made by the same process?
If it's the latter, you already have your answer: sleep().
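
For the single-process case, a minimal sketch might look like this (LWP::UserAgent and the URLs are purely illustrative; any request method will do):

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# hypothetical list of URLs to hit, no more than one per second
my @urls = ('http://www.example.com/a', 'http://www.example.com/b');

for my $url (@urls) {
    my $res = $ua->get($url);
    print $res->status_line, "\n";
    sleep 1;    # pause a full second before the next request
}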

If not, and you need logic to gate requests from multiple instances of the same script, you can use a lock file.
use Fcntl ':flock';

my $file = 'file.lck';
open my $lfh, '>>', $file or die "Can't access $file : $!\n";

if (flock $lfh, LOCK_EX){
    # wait until at least one second has passed since the last request
    sleep 1 while (time - (stat $file)[9] < 1);
    #
    # Send your Amazon requests here
    #
    utime time, time, $file;   # record this request's time in the lock file's mtime
}else{
    print "Can't lock $file: $!";
}
close $lfh;
The code is untested and may not work if you simply cut and paste it. It should, however, give you an idea of how to approach the problem.
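
If you would rather have the script fail fast than sit and wait while another instance holds the lock, a non-blocking variant of the same idea (again untested, same hypothetical lock file) could look like this:

use Fcntl ':flock';

my $file = 'file.lck';
open my $lfh, '>>', $file or die "Can't access $file: $!\n";

# LOCK_NB makes flock return immediately instead of waiting for the lock
if (flock $lfh, LOCK_EX | LOCK_NB){
    # wait until a full second has passed since the last recorded request
    sleep 1 while time - (stat $file)[9] < 1;
    #
    # Send your Amazon requests here
    #
    utime time, time, $file;   # record this request's time in the mtime
    flock $lfh, LOCK_UN;
}else{
    print "Another instance is already talking to the server; try again later.\n";
}
close $lfh;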

--perlplexer

Re: Re: How do I prevent more than one request (to some URL) per second with CGI?
by Balandar (Acolyte) on Nov 27, 2002 at 21:07 UTC
    That's what I was looking for. I didn't even consider locking a file. Thanks for the help perlplexer.
      With that in mind, it may be worth looking at merlyn's Highlander node, "allow only one invocation at a time of an expensive CGI script", which does exactly what its title says. If you combine that with a sleep inside your script so that each run takes longer than a second, you will effectively achieve your goal; a rough sketch of the idea follows.
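
      This isn't merlyn's actual Highlander code, just a sketch of the combination described above: take an exclusive lock on the script's own file so only one copy runs at a time, then pad the run so it never finishes in under a second.

      use strict;
      use warnings;
      use Fcntl ':flock';
      use Time::HiRes qw(time sleep);

      # only one instance at a time: hold an exclusive lock on the script itself
      open my $self, '<', $0 or die "Can't open $0: $!\n";
      flock $self, LOCK_EX or die "Can't lock $0: $!\n";

      my $start = time;

      #
      # Send the request and print the page here
      #

      # pad the run to at least one second, capping the rate at one request per second
      my $elapsed = time - $start;
      sleep(1 - $elapsed) if $elapsed < 1;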

       

      perl -e 'print+unpack("N",pack("B32","00000000000000000000000111101100")),"\n"'