in reply to Delayed-write object/module

Can your wrapper just be something like this? I.e., does it really need to figure out the final directory during the run, or can it wait until after?
#!/bin/bash
out=/tmp/wrapper.$$
perl some_script.pl > $out 2>&1   # run your script, trapping everything
dest=`perl -x $0 $out`            # examine logfile, figure out right place
mkdir -p ${dest%/*}               # make sure the directory exists
mv $out $dest                     # move the log into place
exit
###############################
#!/usr/bin/perl
use strict;
use warnings;
my $filename = $ARGV[0];
my $output;
# read in $filename, figure out where it should go; set $output
print $output;
(Note that the Perl code could use File::Copy's move() as well, and that the bash script could check the exit code of the perl call.)
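
A minimal sketch of that File::Copy variant, in case it helps: here the Perl half does the mkdir/move itself instead of printing the destination for bash. The destination is passed as a second argument purely for illustration; in the real wrapper it would be computed from the logfile contents, which is elided just as above.

#!/usr/bin/perl
# Sketch only: Perl does the "make sure the directory exists" and the
# move, rather than handing a path back to bash's mkdir/mv.
use strict;
use warnings;
use File::Copy qw(move);
use File::Path qw(mkpath);
use File::Basename qw(dirname);

my ($logfile, $dest) = @ARGV;   # $dest would really come from $logfile
mkpath(dirname($dest));         # make sure the directory exists
move($logfile, $dest)
    or die "move $logfile to $dest failed: $!";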

Update: Hmm.. re-reading the OP, I see "some file on the shared disk that everything else is put into". So I guess the mv should be an append, and my main question gets rephrased as: "do you need to append as you go (interleaving with other logs), or can it append as one big chunk at the end?"
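
If one big chunk at the end is acceptable, the mv step just becomes an append. A sketch in Perl (filenames taken from @ARGV for illustration):

#!/usr/bin/perl
# Append variant (sketch): tack the captured log onto the shared file
# in one chunk at the end of the run, then discard the temp copy.
use strict;
use warnings;

my ($tmplog, $shared) = @ARGV;
open my $in,  '<',  $tmplog or die "open $tmplog: $!";
open my $out, '>>', $shared or die "open $shared: $!";
print {$out} $_ while <$in>;
close $out or die "close $shared: $!";
unlink $tmplog;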

Re^2: Delayed-write object/module
by Tanktalus (Canon) on Feb 14, 2006 at 04:28 UTC

    I think you read it correctly the first time. We put everything into /nfsdisk/run-specific/path/here. The logs get put into /nfsdisk/run-specific/path/here/Logs. There are actually multiple logs. I'm just planning to put the stdout and stderr into /nfsdisk/run-specific/path/here/Logs/script-name.out.

    My backup plan was to do something broadly similar to what you have - just update the perl code to print "Logs directory: /nfsdisk/run-specific/path/here" somewhere in its output, then grep, cut, and append the outfile name, and cp or mv it over (a sketch follows). However, I thought it'd be really neat if I could do away with some of my intermediate files, because then I could watch my code's progress as it is written onto the NFS disk from another machine while it runs under someone else's account (I don't have permission to write to the disk, so someone else runs it for me).
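
    Something like this, say - the "Logs directory:" marker and the Logs/script-name.out layout are taken from the description above; the exact regex and naming are assumptions:

        #!/usr/bin/perl
        # Sketch of the grep/cut/append-the-outfile-name step: scan the
        # captured output for the "Logs directory:" marker and print the
        # destination, much like the `dest=` line in the wrapper above.
        use strict;
        use warnings;

        my ($logfile, $script_name) = @ARGV;
        my $dest;
        open my $fh, '<', $logfile or die "open $logfile: $!";
        while (<$fh>) {
            if (/^Logs directory:\s*(\S+)/) {
                $dest = "$1/Logs/$script_name.out";   # assumed layout
                last;
            }
        }
        close $fh;
        die "no 'Logs directory:' line in $logfile\n" unless defined $dest;
        print "$dest\n";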

    This would also be somewhat handy if I could kick it off for some other tracing and logging. Right now, I need to execute probably an equivalent of about 10,000 lines of code or so before I can turn on tracing or logging. I obviously can't trace that execution. If I could have the tracing and logging write through IO::File::Delayed objects, then when I set the log path (which is where my tracing goes as well, if it's on), all the log and trace entries thus far would be immediately written out, and I'd have the full stack of what is going on. As it is, I can't trace anything until I've parsed enough config files and command line params to definitively locate where the trace goes, then I can continue parsing config files and command line params with trace enabled.
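
    A minimal sketch of what such an object might look like - the IO::File::Delayed name is the one proposed above, but the new/print/set_path interface is just my guess at the shape of it:

        #!/usr/bin/perl
        use strict;
        use warnings;

        package IO::File::Delayed;   # name from the post; API assumed

        sub new { bless { buffer => [], fh => undef }, shift }

        # Buffer lines in memory until the destination is known,
        # then write straight through.
        sub print {
            my ($self, @lines) = @_;
            if ($self->{fh}) { print { $self->{fh} } @lines }
            else             { push @{ $self->{buffer} }, @lines }
        }

        # Once the log path is finally known, flush everything
        # buffered so far and switch to pass-through mode.
        sub set_path {
            my ($self, $path) = @_;
            open my $fh, '>>', $path or die "open $path: $!";
            select((select($fh), $| = 1)[0]);   # autoflush, so progress
                                                # is watchable live
            print {$fh} @{ $self->{buffer} };
            $self->{buffer} = [];
            $self->{fh} = $fh;
        }

        package main;

        my $log = IO::File::Delayed->new;
        $log->print("trace: parsing config files\n");    # held in memory
        $log->print("trace: parsing command line\n");    # held in memory
        $log->set_path('/tmp/delayed-demo.log');         # both flushed here
        $log->print("trace: running with real path\n");  # written at once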

    Now, before someone gasps at "10,000 lines" - think of this: put your configuration into XML files, use Getopt::Long for your command-line parsing, and use platform detection to figure out which XML files to load and how to interpret them. To read, parse, and decode all of that, you'll run through thousands of lines of code - just not all your own. And if you count going through the same line of code multiple times as multiple lines executed, 10,000 shouldn't seem unreasonable. It's just an unorthodox way of counting: I'm talking about "executed" lines, not "unique" lines.