in reply to Synchronizing Multiple System Processes

eh? system only returns after the spawned process exits.

Replies are listed 'Best First'.
Re^2: Synchronizing Multiple System Processes
by diotalevi (Canon) on Apr 13, 2007 at 06:17 UTC

    I recall a problem on Windows NT where a file operation wasn't visible to the next part of the program until I'd let some time elapse. It was essentially the same shape as this problem. I think I solved it by introducing an explicit sleep of a second or two to let the file system catch up.

    ⠤⠤ ⠙⠊⠕⠞⠁⠇⠑⠧⠊

Re^2: Synchronizing Multiple System Processes
by neversaint (Deacon) on Apr 13, 2007 at 05:20 UTC
    Sorry I don't quite get your statement above.

    You mean my case above should not happen? i.e. script2.pl should, by default, wait until script1.pl finishes?

    In particular, it gives this error message when running the main script:
    script2.pl : failed to open input file output.txt : No such file or directory
    Please advise. Then what do you think is the cause of my symptom above?
    Hope to hear from you again.

    ---
    neversaint and everlastingly indebted.......

      by default should wait until script1.pl finishes?

      Yup. In fact, the only exception is on Windows when using the special system(1, ...); syntax.

      Of course, if script1.pl launches a child and exits, the main script won't wait for the grandchild to complete. By the way, that means system("command &"); will cause system to return "immediately". (The child, sh, launches command and exits without waiting for command to exit.) Just remove that & if you have it.

      Then what do you think is the cause of my symptom above?

      Based on the limited information at hand, all I can say with reasonable certainty is that output.txt does not exist.

        Just remove that & if you have it.
        No. I don't have &.
        all I can say with reasonable certainty is that output.txt does not exist.
        Yes, that is true. But that is because script1.pl hasn't yet finished creating output.txt.

        ---
        neversaint and everlastingly indebted.......
      Yes, it should not happen. But maybe this stanza from perldoc -f system can shed some light on your problem:
      Beginning with v5.6.0, Perl will attempt to flush all files opened for output before any operation that may do a fork, but this may not be supported on some platforms (see perlport). To be safe, you may need to set $| ($AUTOFLUSH in English) or call the "autoflush()" method of "IO::Handle" on any open handles.
        $| won't help. script1.pl flushed everything it had to flush on exit. (And if it didn't because of some crash, that data is gone forever. It was in the process, which no longer exists. Waiting for it won't help.)
      Well, I've had this problem countless times in many languages. The problem is that while a new output file is being written, it may not (and probably does not) exist on the disk yet. What's more interesting is that it can take quite some time for the file to be committed to the drive. Waiting a fixed amount of time *may* work, but more often than not I haven't seen it make a difference, no matter how long you wait. What you can do is combine the two processes into one. That would rectify the situation, but I have a feeling there's a reason for not having just one process. Another option is to write process 1's output to a file (if you need it) and/or simultaneously use Expect.pm to grab runtime data from process 1 as it runs.