in reply to Gracefully exiting and restarting at a later time.

I'd do it this way:

  1. Produce a list of the files into a file using dir /b/s *.xml >yourfiles. (Or your OS equivalent; a Unix sketch follows the code below.)
  2. Open the list of files with Tie::File and process the tied array backwards.

    Open each file in turn and then delete it from the array once you've processed it:

    #! perl -slw
    use strict;
    use Tie::File;

    ## Tie the list of paths to the file named on the command line
    tie my( @paths ), 'Tie::File', $ARGV[ 0 ] or die $!;

    for my $file ( reverse 0 .. $#paths ) {
        ## Open and process $paths[ $file ]
        print "Processing file: $paths[ $file ]";

        ## Remove the path just processed
        delete $paths[ $file ];
    }
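
    For step 1 on a unixish box, a rough equivalent (assuming GNU find; adjust the pattern to taste) would be:

        find . -name '*.xml' > yourfiles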

To interrupt the processing, just ^C it or kill it or whatever.

The next time you run the program, it will pick up from where it left off, reprocessing the file it was on when it was interrupted.
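
If you want the ^C to be a touch more graceful -- finishing the current file rather than dying partway through it -- a minimal variation (the $stop flag is my embellishment, not part of the original) might trap SIGINT and bail out between files:

    #! perl -slw
    use strict;
    use Tie::File;

    ## Note the interrupt request; act on it at a safe point
    my $stop = 0;
    $SIG{ INT } = sub { $stop = 1 };

    tie my( @paths ), 'Tie::File', $ARGV[ 0 ] or die $!;

    for my $file ( reverse 0 .. $#paths ) {
        last if $stop; ## Stop cleanly between files

        print "Processing file: $paths[ $file ]";
        delete $paths[ $file ]; ## Checkpoint: drop the completed path
    }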

To continue the processing from a different machine, you only need to have access to the filelist file and script and you're away.



Re^2: Gracefully exiting and restarting at a later time.
by Largins (Acolyte) on Dec 21, 2011 at 11:53 UTC

    Okay

    This would also work, except that it involves a needlessly large, redundant file. I say redundant because a directory is itself a file and would contain the same information. For this to work, the file would also have to be updated after each node was processed.

    So far, keeping the information in the database seems like the best idea (although the updated row in the database would have to be written to a file as well)

    Thanks for your thoughts.

    Largins

      except that there is a needlessly large redundant file involved.

      Hm. If you are going to keep the same list in a DB, it will also end up in a file within the filesystem. And depending upon which DB & schema you use, it will occupy anywhere from a little more to perhaps twice as much space as the flat file.

      In order for this to work, the file would also have to be updated after each node was processed.

      You'd have to update the DB after every file to indicate that the file had been processed. And that 'indication', whatever form you chose to use, is still going to end up modifying a file on disk.

      In the end, whether you use a flat file or a "DB", the same steps have to occur -- build a list; grab them one at a time; process; check them off the list -- and the same essential disk activity must occur.
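
      To make the comparison concrete, here's a rough sketch of the same loop against a DB (assuming DBD::SQLite; the filelist.db name and files table are just illustrative). It's the same build/grab/process/check-off cycle, just with more machinery:

          #! perl -slw
          use strict;
          use DBI;

          ## Assumes the files table was populated beforehand from the dir listing
          my $dbh = DBI->connect( 'dbi:SQLite:dbname=filelist.db', '', '',
              { RaiseError => 1 } );

          ## Grab one at a time; process; check it off the list
          while( my( $path ) = $dbh->selectrow_array(
              'SELECT path FROM files LIMIT 1' ) ) {
              print "Processing file: $path";
              $dbh->do( 'DELETE FROM files WHERE path = ?', undef, $path );
          }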

      The difference is, with a DB, you'll also get a whole raft of additional IO going on for its internal logging and journalling activity. All of which is required for its ACID compliance and/or transactional safety, but which is overkill for such a simple -- build a list and discard each item when you've processed it -- application.

      Not to mention all the additional complexity involved in setting up, maintaining and using the DB.

      I like simple, but, each to their own :)

