#! perl -slw
use strict;
use threads;
use Thread::Queue;
use Net::FTP;

my $THREADS = 10;
my $Q = Thread::Queue->new;

# A pool of workers: each opens its own FTP connection, then uploads
# files from the shared queue until it dequeues the undef sentinel.
my @threads = map {
    async {
        my $ftp = Net::FTP->new( 'host.com' )
            or die "Cannot connect: $@";
        $ftp->login( 'blah', 'blah' )
            or die "Cannot login: ", $ftp->message;
        $ftp->ascii();
        while( defined( $_ = $Q->dequeue ) ) {
            $ftp->put( $_ );
        }
        $ftp->quit();
    };
} 1 .. $THREADS;

my @files = getFiles();    # however you build the list of files to send
$Q->enqueue( @files );
$Q->enqueue( (undef) x $THREADS );    # one sentinel per worker
$_->join for @threads;
Start with a small value (e.g. 10) for $THREADS and see how things go. Adjust it upward slowly.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
It might be better to pack all those files into one archive, upload that single file, and then extract it on the server. You could either run a daemon/service on the server that waits for the archive to arrive, or trigger the extraction from the client using SSH, telnet, HTTP or some other means.
Apart from the speed difference (which in my case was pretty big), this approach also keeps the window during which some files are already updated and some are not very short.
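For illustration, a minimal sketch of that approach, assuming a Unix-ish server reachable over SSH; the host name, credentials, file glob and remote path here are all placeholders:

#!/usr/bin/perl -w
use strict;
use Archive::Tar;    # core module
use Net::FTP;

# Pack everything into a single compressed archive.
my @files = glob( '*.html' );
Archive::Tar->create_archive( 'upload.tar.gz', COMPRESS_GZIP, @files )
    or die "Archive failed: ", Archive::Tar->error;

# Upload just the one file, in binary mode.
my $ftp = Net::FTP->new( 'host.com' ) or die "Cannot connect: $@";
$ftp->login( 'blah', 'blah' )         or die "Cannot login: ", $ftp->message;
$ftp->binary;
$ftp->put( 'upload.tar.gz' )          or die "Put failed: ", $ftp->message;
$ftp->quit;

# One possible trigger: extract remotely over SSH.
system( 'ssh', 'blah@host.com', 'tar -xzf upload.tar.gz -C /var/www' ) == 0
    or die "Remote extraction failed: $?";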
#!/usr/bin/perl -w
use strict;
use Carp;
use File::Glob ':glob';
use Net::FTP;
use threads;
use Thread::Queue;
#use Attribute::Attempts;

my $MAX_THREADS = 10;

# Queue up the files to upload; the trailing undef acts as a sentinel.
my $file_queue = Thread::Queue->new;
my @files      = bsd_glob('~/*.html');
$file_queue->enqueue($_) for @files;
$file_queue->enqueue(undef);

my $ftp = Net::FTP->new('host.com')
    or croak "Cannot connect to host.com: $@";
$ftp->login( 'blah', 'blah' )
    or croak "Cannot login ", $ftp->message;

while (1) {
    # Spawn up to $MAX_THREADS workers, one per queued file. Note that
    # each thread is joined as soon as it is created, so the uploads
    # actually proceed one at a time over the single shared connection.
    for (
        my $needed = $MAX_THREADS - threads->list() ;
        $needed && $file_queue->peek ;
        $needed--
      )
    {
        threads->create( \&ftp_put, $file_queue, $ftp )->join();
    }
    sleep(1) while threads->list() > 0;
    last unless $file_queue->peek;
}
$ftp->quit;

# Upload a single file taken from the queue.
sub ftp_put    #: attempts(tries => 10)
{
    my ( $queue, $ftp ) = @_;
    my $file = $queue->dequeue();
    $ftp->put($file)
        or croak "put failed ", $ftp->message;
}
hth, PooLpi
I'm reluctant to critique, given that your script is basically a cut-down and specialised version of an example by renodino, but this isn't a 'pool' solution, despite the name of the example from which it is drawn.
It is a constrained, one-thread-per-work-item solution. That is, it creates a new thread for each work item, whilst constraining the number of concurrent threads/work items to some specified limit. This is a poor substitute for a proper 'pool' solution. It is a fork solution transferred (badly) to threads.
The single biggest limitation of Perl's iThreads (compared to native threads, pthreads, green threads, user threads or pretty much any other flavour of threads) is their expensive start-up cost: cloning the state of the spawning thread, a consequence of the attempt to replicate fork-like behaviour.
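To get a feel for that cost on your own build, a quick back-of-the-envelope sketch using the core Time::HiRes module (the thread count is arbitrary):

#!/usr/bin/perl -w
use strict;
use threads;
use Time::HiRes qw( time );

# Spawn and join N do-nothing threads; every spawn clones the
# interpreter state of the parent, which is where the time goes.
my $N       = 100;
my $start   = time;
my @workers = map { threads->create( sub {} ) } 1 .. $N;
$_->join for @workers;
my $elapsed = time - $start;
printf "%d threads: %.3f s total, %.1f ms per thread\n",
    $N, $elapsed, $elapsed / $N * 1000;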
The whole point of a 'thread pool' is to start a pre-determined number of long-lived threads (the pool) and re-use them over and over to process multiple work items until the task is complete.
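In outline, something like the following, the same shape as the snippet at the top of this thread (the work items and the per-item print are just stand-ins for real work):

#!/usr/bin/perl -w
use strict;
use threads;
use Thread::Queue;

my $POOL_SIZE = 4;
my $Q = Thread::Queue->new;

# Spawn the pool once; each long-lived worker loops, picking up
# item after item, and exits only when it sees the undef sentinel.
my @pool = map {
    threads->create( sub {
        while ( defined( my $item = $Q->dequeue ) ) {
            print "worker ", threads->tid, " handled $item\n";
        }
    } );
} 1 .. $POOL_SIZE;

$Q->enqueue( 1 .. 20 );                 # the work items
$Q->enqueue( (undef) x $POOL_SIZE );    # one sentinel per worker
$_->join for @pool;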
One day, maybe, someone will implement a 'no clone' or 'bare thread' edition of threads, and then we'll see the true benefit of threads. So far, almost all the additions to threads since its dual-lifing serve only to perpetuate the threads-as-a-substitute-for-forks malfeasance. What a waste.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.