in reply to How I unconditionally wait for data in 7 seconds?

Try this:

#! perl -slw
use strict;
use Time::HiRes qw[ time sleep ];   # import sleep too, so the 0.1s sleeps below are sub-second

binmode STDIN;

my $buf = '';
while( 1 ) {
    sysread( STDIN, $buf, 1 );                      # block until the first byte of a packet arrives
    my $end = time() + 7;
    sleep 0.1 while time() < $end;                  # then wait 7 seconds from that first byte
    sysread( STDIN, $buf, 1024, length( $buf ) );   # and read whatever else arrived in the interim
    printf "Got: '%f'\n", unpack 'd', $buf;
}

The idea is that instead of trying to interrupt or time out a read after 7 seconds, you read 1 byte, blocking until it arrives; you then wait 7 seconds and read the rest of whatever has arrived in the interim.

When combined with this:

#! perl -sw
use strict;
use Time::HiRes qw[ time ];

binmode STDOUT;

while( 1 ) {
    my $packet = pack 'd', time;   # 8-byte double: the current timestamp
    syswrite( STDOUT, $packet );   # unbuffered write, so it goes out immediately
    sleep 30;
}

It produces this:

C:\test>sender | listener
Got: '1423634772.508475'
Got: '1423634802.506555'
Got: '1423634832.504635'
Terminating on signal SIGINT(2)
Terminating on signal SIGINT(2)


Re^2: How I unconditionally wait for data in 7 seconds?
by sebastiannielsen2 (Novice) on Feb 11, 2015 at 06:46 UTC
    What is the Point of executing "sleep 0.1" about 70 times, instead of just executing "sleep 7"?
      What is the Point of executing "sleep 0.1" about 70 times, instead of just executing "sleep 7"?

      Resolution: sleep isn't guaranteed to end after exactly the specified time.

      For example, your thread or process could be sitting in the run queue, waiting for a CPU to come free, when the requested sleep time runs out; by the time control is returned to your thread, a few tens or hundreds of microseconds may have elapsed beyond the specified time. If you consistently sleep for 7 seconds, and the sleep consistently takes 7.001 seconds before returning control, your time slots progressively slip a little further each time.

      By looping over a smaller sleep and comparing against real time, you may be a fraction out each time, but those fractions don't accumulate.

      And you can tailor the sleep time used to balance CPU usage against accuracy.

      Maybe you don't need the accuracy. But whenever you are attempting to synchronise timing between systems, you should allow for discrepancies like these.
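
      If it helps to see the difference, here is a small, self-contained sketch (not from the original node; the 1-second interval and five ticks are just illustrative) that schedules ticks both ways and prints how far each drifts from the ideal schedule:

      #! perl -w
      use strict;
      use Time::HiRes qw[ time sleep ];

      my $start = time();

      # Fixed sleeps: any overshoot in each sleep accumulates tick after tick.
      for my $i ( 1 .. 5 ) {
          sleep 1;
          printf "fixed  tick %d: %+.6fs off schedule\n", $i, time() - ( $start + $i );
      }

      $start = time();

      # Short sleeps checked against the wall clock: each tick re-anchors to
      # real time, so individual errors don't accumulate.
      for my $i ( 1 .. 5 ) {
          my $end = $start + $i;
          sleep 0.1 while time() < $end;
          printf "looped tick %d: %+.6fs off schedule\n", $i, time() - $end;
      }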



        Aaah, now I understand. The accuracy won't matter here. The time slot only needs to be larger than the time it takes for the device to send one data block (I think about 2 seconds), and smaller than the interval between two data blocks (the device is set to send one data block every 30 seconds). It's just a way of knowing "the device has finished talking, so let's parse the contents of the data block".

        Since it waits until one character is available and then waits 7 seconds for the data block, the code resynchronizes itself with the device every cycle, even if the device waits 29 or 31 seconds instead of 30 before sending a data block. It will always be 7.x seconds from when the device starts talking, so in reality it will not "slip" relative to real time.

        Your sleep code is extremely fragile against DST changes. Imagine what happens when a DST -1h change (from summer time to winter time) occurs during your while loop. Whoops, the code will instead wait 1 hour and 7 seconds.

Re^2: How I unconditionally wait for data in 7 seconds?
by Anonymous Monk on Feb 11, 2015 at 17:07 UTC
    The code did not work. I tried it just now and it requires a newline before it accepts the input. It won't even detect the first character until I send a newline.
      Your sender might not be sending anything until it's got a newline to send.

      Only if you didn't use the code I posted.

      So, post your short, runnable example that demonstrates the problem and I'll point out what you did wrong.
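
      In the meantime, a minimal sketch (not from the original thread; written only to illustrate the point) of how sender-side buffering produces exactly that symptom, and two ways to avoid it:

      #! perl -w
      use strict;

      # With a pipe, perl block-buffers STDOUT, so a plain print of a few bytes
      # (newline or not) may not reach the reader for a long time:
      print "sat in the buffer";

      # Either enable autoflush so every print is pushed out straight away...
      $| = 1;
      print "goes out immediately";

      # ...or bypass perl's I/O buffering entirely, as the posted sender does:
      syswrite( STDOUT, "also goes out immediately" );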


        Found the problem. It was a problematic config in xinetd causing it to buffer everything until a newline. I took a default config from another Linux distribution and now it works perfectly.