A quick look at the source tells me that the most probable difference is the buffer size used for reading and writing. File::Copy uses a buffer size of 2*1024*1024 bytes, while a program written in C will most likely allocate a statically sized buffer, whose size I don't know.
So it seems, if your numbers and testing are correct, that your hard disk/network stack/NFS combination handles large blocks better than small blocks (of, say, 64k).
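For comparison, the fixed-buffer read/write loop that a typical C copy program uses can be sketched roughly like this. The 64 KiB size and the `copy_file` name are assumptions for illustration only, not taken from any particular tool; a real utility may pick a different size:

```c
/* Minimal sketch of a fixed-buffer file copy, as a C program might do it.
 * BUFSIZE is an assumed value -- the point is only that it is fixed at
 * compile time, unlike File::Copy's 2*1024*1024-byte buffer. */
#include <stdio.h>

#define BUFSIZE (64 * 1024)  /* assumed fixed buffer size */

/* Copy the file named `from` to `to`; returns 0 on success, -1 on error. */
int copy_file(const char *from, const char *to)
{
    static char buf[BUFSIZE];   /* statically allocated, fixed size */
    FILE *in, *out;
    size_t n;

    in = fopen(from, "rb");
    if (in == NULL)
        return -1;
    out = fopen(to, "wb");
    if (out == NULL) {
        fclose(in);
        return -1;
    }

    /* Read up to BUFSIZE bytes at a time and write them out again;
     * every read/write pair is one (small) I/O request. */
    while ((n = fread(buf, 1, BUFSIZE, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            fclose(in);
            fclose(out);
            return -1;
        }
    }

    fclose(in);
    fclose(out);
    return 0;
}
```

With a 2 MB buffer, the same amount of data is moved in far fewer, larger requests, which is where an NFS or network stack that favours big transfers would come out ahead.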
perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The $d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider ($c = $d->accept())->get_request(); $c->send_response( new #in the HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web
In reply to Re: Curious Observation with File::Copy;
by Corion
in thread Curious Observation with File::Copy;
by nimdokk