in reply to Synchronisation between multiple scripts

Create a daemon to handle the connections and transfers: it opens a local socket and accepts the paths of files to be transferred.
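
A minimal sketch of such a daemon, assuming a Unix-domain socket at /tmp/ftp-daemon.sock and a one-path-per-line protocol (the hand-off to the transfer logic is filled in below):

    use strict;
    use warnings;
    use Socket;
    use IO::Socket::UNIX;

    my $sock_path = '/tmp/ftp-daemon.sock';   # assumed socket path
    unlink $sock_path;                        # clear a stale socket from a previous run

    my $listener = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM,
        Local  => $sock_path,
        Listen => 5,
    ) or die "Cannot listen on $sock_path: $!";

    while ( my $client = $listener->accept ) {
        while ( my $path = <$client> ) {      # protocol: one file path per line
            chomp $path;
            next unless -f $path;             # ignore paths that do not exist
            print "queued for transfer: $path\n";   # real hand-off sketched below
        }
        close $client;
    }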

Have it copy or move the files into a private directory when the transfer request is made, and delete them once the transfer is complete. If the daemon dies, it knows what needs to be done when it restarts.
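
A sketch of the spool-and-recover side, assuming /var/spool/ftp-daemon as the private directory; queue_for_transfer and recover_pending are illustrative names, not an existing API:

    use strict;
    use warnings;
    use File::Copy     qw(move);
    use File::Basename qw(basename);

    my $spool = '/var/spool/ftp-daemon';      # assumed private spool directory

    # When a transfer is requested, move the file into the spool so the
    # request survives a daemon crash.
    sub queue_for_transfer {
        my ($path) = @_;
        my $queued = "$spool/" . basename($path);
        move( $path, $queued ) or die "Cannot spool $path: $!";
        return $queued;
    }

    # On restart, anything still sitting in the spool is an unfinished
    # transfer and can simply be retried.
    sub recover_pending {
        return grep { -f } glob "$spool/*";
    }

    # After a transfer completes, remove the spooled copy:
    #   unlink $queued or warn "Cannot remove $queued: $!";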

It can either maintain an open connection to the remote host, or only establish the connection when there are transfers to be done.
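
The connect-on-demand variant could look roughly like this with Net::FTP; the host and credentials are placeholders:

    use strict;
    use warnings;
    use Net::FTP;

    my $ftp;    # stays undef until the first transfer arrives

    # Open the FTP session only when there is work to do, then keep it
    # around for later transfers while it remains alive.
    sub ftp_handle {
        return $ftp if $ftp;
        $ftp = Net::FTP->new( 'ftp.example.com', Timeout => 30 )   # assumed host
            or die "Cannot connect: $@";
        $ftp->login( 'user', 'secret' )                            # assumed credentials
            or die 'Login failed: ', $ftp->message;
        $ftp->binary;
        return $ftp;
    }

    sub transfer_one {
        my ($queued) = @_;
        ftp_handle()->put($queued)
            or die 'put failed: ', $ftp->message;
        unlink $queued or warn "Cannot remove $queued: $!";
    }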



Re^2: Synchronisation between multiple scripts
by weismat (Friar) on Jan 16, 2009 at 10:52 UTC
    I was also thinking about this approach, but I was reluctant to implement it because it is a lot more work than using the lock file (sketched below).
    If the transfer becomes a bottleneck, I will think about implementing it again.
    At the moment I am busy reducing my watch cycles for the successful transfers, but if that proves too slow, I will go for this approach.
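
    For reference, the lock-file approach is essentially an exclusive flock on a shared file; a minimal sketch, assuming /tmp/transfer.lock as the lock path:

        use strict;
        use warnings;
        use Fcntl qw(:flock);

        # Serialise transfers across scripts with an exclusive lock.
        open my $lock, '>', '/tmp/transfer.lock'
            or die "Cannot open lock file: $!";
        flock( $lock, LOCK_EX ) or die "Cannot lock: $!";
        # ... perform the transfer here ...
        flock( $lock, LOCK_UN );
        close $lock;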
      I would go for BrowserUK's approach, and I fail to see how it's going to be a lot more work. In fact, it's probably going to be a lot less work. With a separate program doing the FTP transfers, you only have one place where you have to worry about failed transfers, instead of having to deal with it in all your programs.

      Besides, it follows the Unix toolkit approach: separate things are done by separate programs, each tuned to do its task very well.
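
      To make that concrete, this is roughly all a producing script would need in order to hand a file to the daemon; a sketch, assuming the daemon listens on /tmp/ftp-daemon.sock:

          use strict;
          use warnings;
          use Socket;
          use IO::Socket::UNIX;

          # All FTP error handling lives in the daemon; the producing
          # script just submits a path and moves on.
          my $daemon = IO::Socket::UNIX->new(
              Type => SOCK_STREAM,
              Peer => '/tmp/ftp-daemon.sock',   # must match the daemon's socket path
          ) or die "Transfer daemon not running: $!";
          print $daemon "/path/to/report.csv\n";   # hypothetical file to transfer
          close $daemon;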