This is to evenly distribute email messages from customer_service@fsdf.com among salesperson1, salesperson2, salesperson3, etc.
Looks to me like someone may have made a couple of (bad?) simplifying assumptions:
- That there will always be exactly 3 salespeople available to service requests
- That all customer service messages are equivalent in terms of complexity
- That each salesperson services equivalent requests at the same rate
The approach you're taking hard-codes these assumptions.
Wouldn't it be better to provide a queue that the salespeople could take the next request from? That would let the salespeople load balance amongst themselves, and would still provide management metrics (e.g., the rate at which different salespeople absorb requests).
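Something along those lines, say (an untested sketch; the queue and inbox paths are made up, and each salesperson would run it to claim their next message):
#!/usr/bin/perl
use strict;
use warnings;

# pull model: each salesperson claims the oldest queued message.
# rename() is atomic within one filesystem, so two salespeople
# can never claim the same file.
my $queue = '/dir/queue';         # made-up shared queue directory
my $inbox = "/dir/$ENV{USER}";    # made-up per-salesperson inbox

opendir my $dh, $queue or die "can't open $queue: $!";
my @msgs = sort { -M "$queue/$b" <=> -M "$queue/$a" }    # oldest first
           grep { -f "$queue/$_" } readdir $dh;
closedir $dh;

for my $msg (@msgs) {
    # the first successful rename wins the message
    if ( rename "$queue/$msg", "$inbox/$msg" ) {
        print "claimed $msg\n";
        last;
    }
}
The atomic rename is the whole trick: if two salespeople grab at once, only one rename succeeds and the loser just moves on to the next file.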
Why use cron? This seems like something that would be handled by your friendly neighborhood .forward file.
Either way you go, you could use a plain text file or one of Perl's many persistent data modules to store which salesman should get the next e-mail.
so the steps would be something like:
- read salesman's directory number from disk
- route STDIN to a file
- increment the directory number and write to disk
Actually, now that I look at it, I've done something quite similar to this...
akira:~$ cat .forward
|myproc.pl
akira:~$ cat myproc.pl
#! /usr/bin/perl -w
$fh = die "read that node more carefully!"; # your to-do: load it from a file.
open FH, ">>$fh" or die $!;
while (<STDIN>){
print FH $_;
}
close FH;
# now alter whatever's in $fh, and save it to disk.
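Filling in that to-do might look something like this (untested; the counter file and spool paths are just placeholders):
#!/usr/bin/perl -w
use strict;

my $counter_file = "$ENV{HOME}/.next_salesman";    # placeholder
my $num_salesmen = 3;

# read the salesman's directory number from disk (default to 1)
my $n = 1;
if ( open my $in, '<', $counter_file ) {
    my $line = <$in>;
    close $in;
    $n = $1 if defined $line and $line =~ /^(\d+)/;
}

# route STDIN to that salesman's spool file
my $spool = "/home/salesman$n/mail.in";            # placeholder
open my $fh, '>>', $spool or die "can't append to $spool: $!";
print $fh $_ while <STDIN>;
close $fh;

# increment the number (wrapping around) and write it back to disk
$n = $n % $num_salesmen + 1;
open my $out, '>', $counter_file or die "can't write $counter_file: $!";
print $out "$n\n";
close $out;
A real version should also flock the counter file, since two messages can arrive at the same moment.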
Make sense?
I've found that whenever I'm not able to clearly lay out, in writing, a problem I'm trying to tackle, it's a sure sign of a serious misconception of the whole issue on my part. I commend your effort to explain your specific problem an extra time. ;-)
So, paraphrasing your problem specification, I get this:
1. Rotate files from dir_1 to dir_2 every 5 minutes.
2. Rotate files from dir_2 to dir_3 every 5 minutes.
3. Rotate files from dir_3 to dir/main every 5 minutes.
4. and so on.
A way to solve the problem might be to write a simple Perl daemon to do just that. Let it sleep for 5 minutes, after which it'll go about rotating the files. The way it would track when a file had last been moved (and thus when its next move is to happen) is by examining its creation time stamp (as this changes whenever a file is moved/created, or am I wrong?).
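(If it helps: a rename within the same filesystem updates a file's ctime but leaves its mtime alone, so stat's field 10 is the one to watch. A tiny untested sketch, with a made-up path:)
use strict;
use warnings;

# stat field 9 is mtime (contents changed); field 10 is ctime
# (inode changed -- which a mv/rename on the same filesystem does)
my $file  = '/share/files/rotate/dir_1/12345.msg';    # made-up path
my @st    = stat $file or die "can't stat $file: $!";
my $ctime = $st[10];
print "$file is due for rotation\n" if time - $ctime > 5 * 60;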
Let me think some more.. and hopefully I can get a script to you (unless another monk beats me to it ;-)
Here's the basic algorithm, though:
1. Study files in all directories.
2. Order files by their creation time.
3. Calculate next earliest time when a bunch of files has to be rotated.
4. Sleep...
5. (wake up), rotate files.. and go to step 1.
UPDATE: Here's a very dirty attempt at the code you might be looking for:
use strict;
use warnings;
use File::Copy qw(move);
use File::Find;

# plug your own values here
my $rotate_dir   = '/share/files/rotate';
my $move_timeout = 5 * 60;    # 5 minutes

my %files;     # path => time we first saw it
my @rotate;    # paths due to be moved on this pass

while (1) {
    # find files that have to be rotated
    @rotate = ();
    find( \&wanted, $rotate_dir );
    rotate();
    sleep $move_timeout;
}

sub wanted {
    return unless -f $File::Find::name;
    if ( exists $files{$File::Find::name} ) {
        # alright, we saw it before.. so, has its
        # time come to move?
        push @rotate, $File::Find::name
            if $files{$File::Find::name} < time - $move_timeout;
    }
    else {
        # first sighting: start this file's clock
        $files{$File::Find::name} = time;
    }
}

sub rotate {
    # "I know, I don't have to use global vars, I KNOW!" ;)
    for my $old_path (@rotate) {
        my $new_path = $old_path;
        if ( $old_path =~ m|/main/| ) {    # initial spot
            # drop it to the first sales rep's directory
            $new_path =~ s|/main/|/dir_1/|;
        }
        else {
            # retrieve the sales person's id from the directory name
            my ($sales_id) = $old_path =~ m|/dir_(\d+)/|;
            my $new_id = $sales_id + 1;
            $new_path =~ s|/dir_$sales_id/|/dir_$new_id/|;
            # drop back to the main directory if the next sales
            # rep's directory doesn't exist
            ( my $new_dir = $new_path ) =~ s|/[^/]+$||;
            ( $new_path = $old_path ) =~ s|/dir_$sales_id/|/main/|
                unless -d $new_dir;
        }
        # move to that new directory and forget the old path;
        # the next find() pass will re-track it under the new one
        move( $old_path, $new_path )
            or warn "can't move $old_path: $!";
        delete $files{$old_path};
    }
}
Things left to do (other than actually 'finishing' the code) include making sure your daemon won't chew up all your system resources while doing its work. For that, I'd suggest making it a 'nice' process using the setpriority() built-in with a higher niceness value. Also, please be forewarned that I didn't even have the chance to compile the code (actually, I'm in a rush now to do other work.. err.. 'at my real work' ;)
NOTE: I also assumed that the 5 minute 'time limit' applies to each file individually. Therefore, inside my code I gather statistics on each file.
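(The renice bit is a one-liner with Perl's setpriority built-in, by the way:)
# be a 'nice' process: args are which (0 = PRIO_PROCESS),
# who (0 = the current process), and the new niceness value
setpriority( 0, 0, 10 );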
So, paraphrasing your problem specification, I get this:
1. Rotate files from dir_1 to dir_2 every 5 minutes.
2. Rotate files from dir_2 to dir_3 every 5 minutes.
3. Rotate files from dir_3 to dir/main every 5 minutes.
4. and so on.
i don't think this is the problem statement. i think it's more like:
1. file arrives in main directory.
2. move file to one of three directories, alternate between dir_1, dir_2, dir_3
i could be wrong, of course.
~Particle *accelerates*
Close... It's tough for me to explain and even tougher for me to figure it out! It's more like this...
1. For the next 5 minutes, rotate files from dir/main to dir_1.
2. For the 5 minutes after that, rotate files from dir/main to dir_2.
3. For the 5 minutes after that, rotate files from dir/main to dir_3.
4. Start at 1. again (loop).
i'll give you some non-tested code:
use strict;
use warnings;
use File::Copy;

## set up directories
my $in_dir    = '/dir/main';
my @xfer_dirs = qw! /dir/dir_1 /dir/dir_2 /dir/dir_3 !;

## pick a random dir to start
## so salesperson 1 doesn't get more mail when the job restarts
my $xfer_dir_index = int rand @xfer_dirs;

while (1) {
    ## list of files in $in_dir
    my @infiles;
    {
        ## create a directory handle
        local *DH;
        ## open the directory
        opendir( DH, $in_dir )
            or die "ERROR: $0 - can't open dir $in_dir - $!";
        ## get a list of plain files (skipping dotfiles)
        @infiles = grep { !/^\./ && -f "$in_dir/$_" } readdir(DH);
        ## close the directory
        closedir DH;
    }

    for (@infiles) {
        ## move the file
        move(
            "$in_dir/$_",
            "$xfer_dirs[$xfer_dir_index]/$_",
        ) or warn "ERROR: $0 - can't move $_ - $!";

        ## cycle the directory index
        $xfer_dir_index++;
        $xfer_dir_index %= @xfer_dirs;
    }

    ## sleep, then go again
    sleep(300);
}
~Particle *accelerates*
while (1) {
    `mv /dir/main/* /dir_1`;
    sleep(300);
    `mv /dir/main/* /dir_2`;
    sleep(300);
    `mv /dir/main/* /dir_3`;
    sleep(300);
}
I think I see your problem... You have a main directory receiving emails; every 5 minutes it dumps one email into one of three directories (counting sequentially, and restarting after an email has been put in the last directory). Now, do you expect only one email per 5 minutes, or many? If many, do you want it to dump all of the emails into the current rotating directory, or perhaps split the emails evenly between the x directories?
how about...
my @dirs = ("/dir_1", "/dir_2", "/dir_3");
my $i = 0;
while (1) {
    sleep(300);
    `mv /dir/main/* $dirs[$i++ % @dirs]`;
}