in reply to Re^3: making a loop script with a remote URL call faster
in thread making a loop script with a remote URL call faster

I only just realized that the OP mixes timed and non-timed code. That only works if jitter is not considered a major problem.

If the timing needs to be quite exact, my usual approach is to run two different processes and use some form of interprocess communication. One process fetches the data from the remote server (however long that takes), parses it, and then sends the relevant information to the other process, which does the cyclic processing.

There are many ways to do this, depending on the data size and the operating system. TCP or UDP work OK; Unix Domain Sockets are quite a bit faster, though. There are also pipes and similar mechanisms.
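For illustration, a rough sketch of that split using fork and a plain pipe (fetch_price() and process_price() are placeholders for the OP's actual code; this just shows the plumbing, not a drop-in solution):

use strict;
use warnings;
use IO::Handle;

pipe(my $reader, my $writer) or die "pipe failed: $!";
$writer->autoflush(1);

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: fetch and parse, however long that takes, then hand the result over
    close $reader;
    while (1) {
        my $price = fetch_price();     # the slow remote call plus parsing
        print {$writer} "$price\n";
        sleep 60;
    }
    exit;
}

# Parent: the cyclic side only ever sees the already-parsed result
close $writer;
while (my $line = <$reader>) {
    chomp $line;
    process_price($line);              # timing-sensitive cyclic processing
}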

Often it's quite a lot easier to use an existing messaging solution, though. Again, there are many options, depending on your requirements. Personally, I will of course shamelessly plug my own, which is Net::Clacks. It comes with some examples that should get you started. If not, I'm a regular here. And there is even a small (slightly out-of-date) mini-tutorial here on PM: Interprocess messaging with Net::Clacks. The Upgrade Guide included in more recent versions of Net::Clacks also contains quite a lot of information not documented elsewhere.

perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'

Re^5: making a loop script with a remote URL call faster
by haukex (Archbishop) on Jan 18, 2022 at 13:41 UTC
Re^5: making a loop script with a remote URL call faster
by LanX (Saint) on Jan 16, 2022 at 23:43 UTC
    I have problems following this, and it really depends on what the OP actually wants (this has a strong stench of an XY problem).

    For one possible interpretation:

    use Time::HiRes qw(sleep time);

    my $next_time = time();
    while (1) {
        next if time() < $next_time;      # busy-wait until the exact start of the slot
        $next_time += 60;                 # no accumulated lag
        my $price = fetch();
        do_it($price);
        my $took = time() - ($next_time - 60);
        handle_edge_case() if $took > 60;
        sleep 59 - $took if $took < 59;   # coarse sleep (guard avoids a negative sleep);
                                          # the busy-wait above does the rest
    }

    (totally untested, there are most probably dragons...)

    The idea is that you sleep for at most 59 seconds (or maybe even a bit more) and let the loop's busy-wait catch the "exact time".

    It still needs to handle the edge case where fetch() and do_it() took longer than 60 seconds, though.

    But how exactly really depends on the problem to solve...

    So what am I missing that justifies two communicating processes???

    edit

    The logic might be clearer if I calculated $remain = $next_time - time(); instead of $took ... left as a task for the interested reader :)
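    One way that variant might look (again untested, with the same dragons as above; fetch(), do_it() and handle_edge_case() are still placeholders):

    use Time::HiRes qw(sleep time);

    my $next_time = time();
    while (1) {
        next if time() < $next_time;        # busy-wait until the slot starts
        $next_time += 60;                   # no accumulated lag
        my $price = fetch();
        do_it($price);
        my $remain = $next_time - time();   # time left until the next slot
        handle_edge_case() if $remain < 0;  # fetch()/do_it() blew the 60-second budget
        sleep $remain - 1 if $remain > 1;   # coarse sleep; the busy-wait does the rest
    }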

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      Well, when someone wants to make a web call every 60 seconds and fusses about the two-second delay it takes to make that call, I naturally assumed this was a timing-critical thing. That's why I posted that it might be useful to put the timing-critical code into a separate process and not mix it with other code that could introduce "random" delays.

      While interprocess communication on a single computer has its own delays and caveats, these delays are usually much shorter and more predictable than calls to external services. For my example code, I assumed that the OP wanted to start the external call at a specific interval. There is no way to predict how long that call takes or when the server actually sees it (lost packets in the handshake and so forth), but that is another matter entirely.

      Another thing to consider is that the code outside the web call also gets delayed while the web call is in progress. The $pricing thing leads me to believe this is some kind of trading bot, which I assume wants to check for the new price at the start of every minute, just when the trading platform releases it.

      Moving the web call and its delay into a separate process would allow the rest of the bot to run at full speed the whole time, except for the minimal delay it takes to regularly check whether a new price has arrived through a Unix Domain Socket or pipe.
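      For example, a zero-timeout poll on the read end of that socket or pipe with IO::Select could look roughly like this ($reader, handle_new_price() and do_other_bot_work() are placeholders, not the OP's actual code):

      use IO::Select;

      my $sel = IO::Select->new($reader);   # $reader: read end of the pipe/socket
      my $buf = '';

      while (1) {
          # Zero-timeout poll: costs next to nothing when no new price has arrived
          if ($sel->can_read(0)) {
              my $n = sysread($reader, $buf, 4096, length $buf);
              last unless $n;                  # fetcher process went away
              while ($buf =~ s/^(.*?)\n//) {   # one complete line per price
                  handle_new_price($1);
              }
          }
          do_other_bot_work();                 # the rest of the bot keeps running
      }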

      I'm probably overthinking this, but hey, that's what I do best 8-)

      perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'
        I'm confident my code runs exactly every 60 secs.

        But yes, as it stands now, it can't raise an exception if the fetch takes longer than 60 seconds. That would require using alarm.

        But no matter which interpretation or implementation is chosen, this edge case needs to be defined and handled.
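        A minimal sketch of that, assuming fetch() is the blocking call (the usual eval/alarm idiom, with its usual caveats):

        my $price = eval {
            local $SIG{ALRM} = sub { die "fetch timed out\n" };
            alarm 55;             # arbitrary headroom inside the 60-second budget
            my $p = fetch();
            alarm 0;              # cancel the timer on success
            $p;
        };
        unless (defined $price) {
            die $@ if $@ && $@ ne "fetch timed out\n";
            handle_edge_case();   # whatever "too slow" is supposed to mean here
        }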

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery