Faile has asked for the wisdom of the Perl Monks concerning the following question:
Hi,
Is there a way to regulate how sockets are used by Net::FTP?
To me it sounds like there should be a way to "reuse" sockets, since the transfers happen sequentially rather than in parallel.
Situation: I have a dropbox server from which I copy the incoming files to an internal server using a Perl script. The script also archives the files, makes a copy to a dev environment on demand, and writes these transfers to a database for tracking.
The problem I'm facing is that with hundreds of files arriving every 15 minutes, the script opens a truckload of sockets (nearly 1,500 simultaneously open sockets) for all the put commands.
Here are the FTP specific commands used in the script.
This part is executed only once:
```perl
$ftp2 = Net::FTP->new($host2, Port => $port, Timeout => 60)
    or die "Cannot connect2 to $host2: $@";
$ftp2->login($user2, $pass2)
    or die "ftp2: Cannot login ", $ftp2->message;
$ftp2->binary
    or die "Couldn't change mode2 to binary!\n";
```
This code is executed for every file that should be moved; the files are found using File::Find. In the "wanted" callback I first archive the file and then transfer it:
```perl
# Note: $problem2 should be declared before the loop; writing
# "or my $problem2 = 1" scopes the variable to this one statement,
# so the flag is lost immediately.
$ftp2->put($file) or $problem2 = 1;
```
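The per-file transfer described above can be sketched as a single File::Find traversal that reuses one Net::FTP control connection for every put. This is a minimal sketch, not the poster's actual script: the helper name `transfer_all`, the `@dirs` argument, and the error-flag handling are assumptions filled in around the two snippets shown in the post.

```perl
use strict;
use warnings;
use File::Find;
use Net::FTP;   # the connection object is created once, as in the post

# Walk the given directories and put every plain file over the one
# already-open FTP connection. Returns a true "problem" flag if any
# put failed (hypothetical helper, not from the original post).
sub transfer_all {
    my ($ftp, @dirs) = @_;
    my $problem = 0;
    find(sub {
        return unless -f $_;            # skip directories and specials
        # Reuse the same control connection; each put still opens its
        # own short-lived data connection, which is where the many
        # sockets come from.
        $ftp->put($_) or $problem = 1;
    }, @dirs);
    return $problem;
}
```

The control connection is opened once, but FTP's protocol design means every `put` negotiates a fresh data connection; closed data sockets can then linger in the kernel's TIME_WAIT state, which is one plausible reason the socket count climbs so high.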
If there's no easy way to limit the sockets, I would appreciate pointers to better ways to do this. I used File::Copy earlier to copy the files over SMB to UNC paths, but found that solution rather unstable, causing intermittent errors for unknown reasons (probably network related).
Thanks in advance
Replies are listed 'Best First'.
Re: limiting the amount of sockets opened by f
  by ikegami (Patriarch) on Feb 10, 2012 at 08:09 UTC
    by Faile (Novice) on Feb 14, 2012 at 08:45 UTC
      by ikegami (Patriarch) on Feb 14, 2012 at 20:24 UTC

Re: limiting the amount of sockets opened by f
  by rovf (Priest) on Feb 10, 2012 at 09:27 UTC
    by Faile (Novice) on Feb 10, 2012 at 20:35 UTC