philc has asked for the wisdom of the Perl Monks concerning the following question:

Revered Monks...

Using ActiveState Perl 5.8.7.815 on Windows XP.

I've created an application that takes input from a spreadsheet and then moves PDFs from one directory to another on a networked server. I'm using File::Copy to move the PDFs. The spreadsheet contains about 2000 entries and is parsed into an array. The PDFs range in size from 20 KB to 25 MB.

The key code snippet goes as:

#-------------------
chdir $SearchDirectory;
foreach my $file (@data) {
    if (-e $file) {
        move($file, $DestinationDirectory);
    }
}
#--------------------

Here's the question...the script iterates through the array faster than Windows actually shows the files as moved...that is, there is a delay between when the script tells Windows to move a file and when the file appears in $DestinationDirectory.

Is this problematic/dangerous? If I were to select and move the files manually through Windows, the files would not move as quickly as the script does. Should I add a wait cycle or some form of confirmation inside the loop to ensure that each file is moved before the loop iterates? Obviously this would slow things down, but that would ultimately be the point.
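Rather than a wait cycle, one option is simply to check move()'s return value: File::Copy's move returns true on success and false (with $! set) on failure, and it does not return until the operation completes. A minimal sketch, using temporary directories as stand-ins for the post's $SearchDirectory and $DestinationDirectory:

```perl
use strict;
use warnings;
use File::Copy qw(move);
use File::Temp qw(tempdir);

# Temporary stand-ins for the directories in the post
my $SearchDirectory      = tempdir(CLEANUP => 1);
my $DestinationDirectory = tempdir(CLEANUP => 1);

# Create a dummy "PDF" so the sketch is runnable
my $name = 'example.pdf';
open my $fh, '>', "$SearchDirectory/$name" or die "open: $!";
print {$fh} "dummy contents\n";
close $fh;

my @data = ($name);

chdir $SearchDirectory or die "Cannot chdir to $SearchDirectory: $!";

foreach my $file (@data) {
    next unless -e $file;
    # move() blocks until the rename (or copy+delete) finishes,
    # and returns false with $! set if anything went wrong
    move($file, $DestinationDirectory)
        or warn "Failed to move $file: $!";
}

print "moved\n" if -e "$DestinationDirectory/$name";
```

If move() returned true, the file really has been handed over to the filesystem; any lag you see in Explorer is purely display refresh.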

Cheers
phil

Replies are listed 'Best First'.
Re: file::move and networks
by roboticus (Chancellor) on Jul 15, 2007 at 21:06 UTC

    philc:

    If you look at it using a command window instead of an explorer window, you'll see that they're copied quickly. The explorer window doesn't get immediate updates when the directory actually changes. Instead there's an arbitrary delay before it "notices". So the explorer window is the one that's slow, not the move.

    ...roboticus

Re: file::move and networks
by moritz (Cardinal) on Jul 15, 2007 at 21:46 UTC
This is an issue your operating system has to take care of, and it usually does.

If you are moving the files within a single partition, the data itself is not moved at all, just a bit of metadata, which is blazingly fast.

    The only thing you might have to worry about are network file systems over a lagging connection.

Re: file::move and networks
by aquarium (Curate) on Jul 16, 2007 at 05:44 UTC
since move can fail...and the Windows OS doesn't handle that very well...it's better to "copy", then check that all files got there (compare file sizes), and then delete the originals OR move them to a local backup location. a failed move on the Windows OS can leave you with half of the directory on each server, with a couple of corrupt files as well.
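The copy-then-verify-then-delete approach described above can be sketched as follows; this is only an illustration (temporary directories and a dummy file stand in for the real ones), and comparing sizes is a weak check compared to, say, a checksum:

```perl
use strict;
use warnings;
use File::Copy qw(copy);
use File::Temp qw(tempdir);

# Temporary stand-ins for the source and destination directories
my $src = tempdir(CLEANUP => 1);
my $dst = tempdir(CLEANUP => 1);

# Create a dummy 1 KB file to transfer
open my $fh, '>', "$src/doc.pdf" or die "open: $!";
print {$fh} 'x' x 1024;
close $fh;

for my $file ('doc.pdf') {
    copy("$src/$file", "$dst/$file")
        or die "copy of $file failed: $!";

    # Only delete the original once the sizes match
    if (-s "$src/$file" == -s "$dst/$file") {
        unlink "$src/$file"
            or warn "could not delete original $file: $!";
    }
    else {
        warn "size mismatch for $file; original kept";
    }
}
```

The original is never removed until the copy has been confirmed, so a failed transfer leaves the source intact instead of half-moved.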
    the hardest line to type correctly is: stty erase ^H
      since move can fail...and windows OS doesn't handle that very well.
      1. It's the unix move semantics, silently overwriting a protected file, that suck.
2. If you look inside File::Copy at the move()/mv() routine, it uses a 'delete if necessary and copy' process on Windows anyway.
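Since move() will silently clobber an existing destination file, a cautious script can check for the target first. A hedged sketch (the directories and filename are made up for the demonstration):

```perl
use strict;
use warnings;
use File::Copy     qw(move);
use File::Basename qw(basename);
use File::Temp     qw(tempdir);

my $src = tempdir(CLEANUP => 1);
my $dst = tempdir(CLEANUP => 1);

# Create a source file and a same-named file at the destination,
# so the overwrite guard has something to trip on
for my $dir ($src, $dst) {
    open my $fh, '>', "$dir/report.pdf" or die "open: $!";
    print {$fh} "from $dir\n";
    close $fh;
}

my $file   = "$src/report.pdf";
my $target = "$dst/" . basename($file);

if (-e $target) {
    # Refuse to overwrite; the original stays where it was
    warn "refusing to overwrite $target\n";
}
else {
    move($file, $target) or die "move failed: $!";
}
```

Note the check-then-move is not atomic, so on a shared network directory a file could still appear in between; it only guards against the common case.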

      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        It's the unix move semantics, silently overwriting a protected file, that suck.

Un*x does not do that; you need 'mv -f file1 file2' to force the mv onto an existing file... or maybe I did not get your comment. Now, whether a command tries to enforce the w bit on a file (vi does, e.g.) is another problem.

        cheers --stephan