Personally, I like the idea of using the website to update a control table in the database, which the perl script consults to determine its behavior.
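A minimal sketch of that idea might look like the following (the script_control table, its next_step column, and the SQLite connect string are all placeholders; adapt them to your own schema and DBD driver):

#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Placeholder connect string; substitute your own driver and credentials.
my $dbh = DBI->connect('dbi:SQLite:dbname=control.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });

while (1) {
    # The web front end updates this row; the background script polls it.
    my ($directive) = $dbh->selectrow_array(
        'SELECT next_step FROM script_control WHERE id = 1');

    if (defined $directive && length $directive) {
        print "Performing requested step: $directive\n";
        # Clear the directive so it is only acted on once.
        $dbh->do('UPDATE script_control SET next_step = NULL WHERE id = 1');
    }
    else {
        print "No directive: carrying on with the normal sequence\n";
    }
    sleep 5;
}

The web page would simply UPDATE that row; the background script picks up the change on its next pass through the loop.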
In addition to the other suggested solutions, classic UNIX dogma includes signals. You could add a signal handler to your code that re-reads and applies the configuration info stored in the database. By tradition, the hangup signal (SIGHUP) is usually the one chosen for reloading configs.
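As a rough sketch of that approach (load_config_from_db() here is just a placeholder for your own DBI read; the signal handling itself is plain core Perl):

#!/usr/bin/env perl
use strict;
use warnings;

my %config = load_config_from_db();

# Re-read the configuration whenever we receive SIGHUP.
$SIG{HUP} = sub {
    %config = load_config_from_db();
    warn "Config reloaded on SIGHUP\n";
};

while (1) {
    # ... normal processing, driven by %config ...
    sleep 1;
}

sub load_config_from_db {
    # Placeholder: replace with a DBI query against your config table.
    return (mode => 'default');
}

You (or the web script) would then send the signal with something like kill -HUP $pid, which means the background script needs to record its process ID somewhere the sender can find it; a pid file is the usual choice.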
G'day Amblikai,
You might consider using Named Pipes for this task.
The basic idea would be to have your SQL (background) script set up the named pipe.
It would check if there were any directives (i.e. specific steps to be executed) to be read from the pipe.
If so, it would read, and then perform, these in whatever order they arrived (FIFO).
If not (or when all piped directives had been exhausted),
it would go back to its default list of steps.
Your WEB script would be writing these directives to the named pipe;
I assume on a somewhat ad hoc basis.
The directives, themselves, could be step numbers or step names;
or, potentially, something more complex like commands with options and arguments.
I've put together a couple of test scripts to give you an idea of the techniques involved.
There's no DBI or CGI code here; really just ideas for you to modify and adapt
to whatever code you currently have.
Also, in total, they're quite lengthy, so I've put each in spoiler tags.
First, pm_1133400_sim_sql.pl, which is intended to simulate your SQL (background) script.
#!/usr/bin/env perl

use strict;
use warnings;
use autodie;

use constant SQL_STEP_INTERVAL => 1;

use IO::Select;
use POSIX qw{mkfifo};

use PM_1133400_Shared qw{:SQL};

sub _handle_death_signal {
    my $sig = shift;
    remove_fifo();
    die "\nSQL_SIM: Trapped and handled SIG$sig signal. Exiting.\n";
}

BEGIN {
    my @death_signals = qw{INT HUP};
    @SIG{@death_signals} = (\&_handle_death_signal) x @death_signals;
    remove_fifo();
    mkfifo FIFO, 0600;
}

END {
    remove_fifo();
}

my @steps = @{get_steps()};
my $step  = $#steps;

{
    local $| = 1;

    if (fork) {
        # Parent: keep a writer on the FIFO open so the child's reader
        # never sees EOF when the WEB script closes its end.
        open my $keep_alive_pipe_writer, '>', FIFO;
        wait;
        close $keep_alive_pipe_writer;
        exit;
    }

    # Child: read directives from the FIFO and run the steps.
    open my $pipe_reader, '<', FIFO;

    my $select = IO::Select::->new();
    $select->add($pipe_reader);

    while (1) {
        # Non-blocking check for any piped directives.
        while ($select->can_read(0)) {
            $step = <$pipe_reader>;
            chomp $step;
            print 'SQL_SIM: Executing web request: ';
            $steps[$step]->();
        }
        print 'SQL_SIM: Executing normal step: ';
        $step = ++$step % @steps;
        $steps[$step]->();
        sleep SQL_STEP_INTERVAL;
    }

    close $pipe_reader;
}
Next, pm_1133400_sim_web.pl, which simulates your WEB script.
#!/usr/bin/env perl

use strict;
use warnings;
use autodie;

use constant {
    TESTS     => 5,
    WAIT_RAND => 3,
};

use PM_1133400_Shared qw{:WEB};

my $total_steps = get_total_steps();
my @queue;

{
    local $| = 1;

    for (1 .. TESTS) {
        sleep int rand WAIT_RAND;
        my $rand_step = int rand $total_steps;
        if (-p FIFO) {
            open my $pipe_writer, '>', FIFO;
            if (@queue) {
                for (@queue) {
                    print "WEB_SIM: FIFO Available! Piping from queue [$_]\n";
                    print $pipe_writer "$_\n";
                }
                @queue = ();
            }
            print "WEB_SIM: FIFO Available! Piping new step [$rand_step]\n";
            print $pipe_writer "$rand_step\n";
            close $pipe_writer;
        }
        else {
            print "WEB_SIM: FIFO Unavailable! Queueing step [$rand_step].\n";
            push @queue, $rand_step;
        }
    }
}
You'll see that both scripts use a PM_1133400_Shared module.
That was a convenience for me; you may want something entirely different (e.g. a config file).
However, you will, at the very least, need some mechanism that allows both scripts
to know the filename of the named pipe (and I recommend you store that information once).
Here's PM_1133400_Shared.pm:
package PM_1133400_Shared;

use strict;
use warnings;
use autodie;

use constant FIFO => 'pm_1133400_fifo';

use Exporter qw{import};

our @EXPORT_OK = qw{FIFO get_steps get_total_steps remove_fifo};
our %EXPORT_TAGS = (
    SQL => [qw{FIFO get_steps remove_fifo}],
    WEB => [qw{FIFO get_total_steps}],
);

{
    my @steps = (
        sub { print "Step 0\n" },
        sub { print "Step 1\n" },
        sub { print "Step 2\n" },
        sub { print "Step 3\n" },
        sub { print "Step 4\n" },
        sub { print "Step 5\n" },
        sub { print "Step 6\n" },
        sub { print "Step 7\n" },
        sub { print "Step 8\n" },
        sub {
            print "Step 9\n";
            remove_fifo();
            print "Done!\n";
            exit;
        },
    );

    sub get_steps       { [@steps] }
    sub get_total_steps { scalar @steps }
}

sub remove_fifo { unlink FIFO if -e FIFO }

1;
Now, if you save those three files in one directory, and make the scripts executable,
you can test it like this
(to run the SQL script in the background and the WEB script in the foreground):
pm_1133400_sim_sql.pl & pm_1133400_sim_web.pl
The WEB script operates randomly.
In some cases, it might start issuing directives before the named pipe is created,
so it will put those in a queue and write them to the pipe later:
WEB_SIM: FIFO Unavailable! Queueing step [5].
SQL_SIM: Executing normal step: Step 0
SQL_SIM: Executing normal step: Step 1
WEB_SIM: FIFO Available! Piping from queue [5]
...
SQL_SIM: Executing web request: Step 5
Other times, the SQL script may have performed some of its normal steps before any
WEB script directive is issued:
SQL_SIM: Executing normal step: Step 0
SQL_SIM: Executing normal step: Step 1
WEB_SIM: FIFO Available! Piping new step [4]
SQL_SIM: Executing web request: Step 4
SQL_SIM: Executing normal step: Step 5
SQL_SIM: Executing normal step: Step 6
...
And, as you can see from that last sample extract,
WEB can cause the steps to be performed out-of-order ([0, 1, 4, ...]) but,
when those are completed, SQL will continue on in-order ([..., 4, 5, 6]).
I've only used builtin functions, modules and pragmata;
you can find documentation for everything via perldoc.
Of course, if you have other questions, feel free to ask.
A fairly common technique is to use an SQL database to hold a job queue, along with step control, time-stamping, and status logging for each job. As the workers complete a step, they update the status table for the job they are working on (within an SQL transaction) and perhaps alter their flow in some way. For user status reporting, a simple polling loop will usually suffice, with the reads also done within a transaction so that the data being read is atomically correct.
There are a number of batch job scheduling frameworks available on CPAN.
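A stripped-down sketch of that claim-and-update cycle might look like this (the jobs table, its id/step/status columns, and the SQLite connect string are invented for the example):

#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });

while (1) {
    # Claim the next pending job and mark it running, inside a transaction.
    $dbh->begin_work;
    my ($id, $step) = $dbh->selectrow_array(
        q{SELECT id, step FROM jobs WHERE status = 'pending' LIMIT 1});
    $dbh->do(q{UPDATE jobs SET status = 'running' WHERE id = ?}, undef, $id)
        if defined $id;
    $dbh->commit;

    if (defined $id) {
        print "Working on job $id, step $step\n";
        # ... do the actual work for this step ...
        $dbh->do(q{UPDATE jobs SET status = 'done' WHERE id = ?}, undef, $id);
    }
    else {
        sleep 5;    # nothing pending; poll again shortly
    }
}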