in reply to Leap second coming up. Check your date handling code

Note that Google (and others) solve the problem by "smearing" the leap second:

Business Insider Article


Replies are listed 'Best First'.
Re^2: Leap second coming up. Check your date handling code
by 1nickt (Canon) on Dec 27, 2016 at 11:49 UTC

    I read about that(1), and I hate it!

    As linked to by pme, the official statement from the INTERNATIONAL EARTH ROTATION AND REFERENCE SYSTEMS SERVICE (IERS) reads:

    A positive leap second will be introduced at the end of December 2016. The sequence of dates of the UTC second markers will be: 2016 December 31, 23h 59m 59s 2016 December 31, 23h 59m 60s 2017 January 1, 0h 0m 0s
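    That 23h 59m 60s marker is exactly what trips up date handling code: many time libraries refuse to represent a second numbered 60. A minimal Python sketch (hedged: the thread itself is about Perl, where DateTime-style modules show the same split) of one standard library disagreeing with itself:

```python
import time
from datetime import datetime

stamp = "2016-12-31 23:59:60"   # the leap-second marker from the IERS bulletin
fmt = "%Y-%m-%d %H:%M:%S"

# time.strptime follows the C convention and tolerates tm_sec up to 61,
# so the leap-second marker parses:
parsed = time.strptime(stamp, fmt)
print(parsed.tm_sec)            # 60

# datetime only models seconds 0..59, so the very same string is rejected:
try:
    leap = datetime.strptime(stamp, fmt)
except ValueError as e:
    leap = None
    print("datetime refused:", e)
```

    Whether your stack lands on the tolerant or the strict side of that line is precisely what "check your date handling code" is asking you to find out.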

    In spite of "American exceptionalism" (cough, cough), neither Google, nor Akamai, nor Microsoft, nor Bloomberg, owns science. The international scientific community decided in 1972 that the correct thing to do was to add a second periodically, as is done with the leap day quadrennially. Google's and other companies' decision to employ "smearing" because it's more convenient for them undercuts scientific authority (last thing we need these days!), privatizes reality, and not least, makes their clocks unreliable for 20 hours or more.

    Support science! Say "no" to Google's Not Time Protocol! Stick to pool.ntp.org -- it's open source, community-based, used by many millions of servers, accurate, and written in Perl :-)


    1 (Although not in "Business Insider", which must be owed something by Google, since its articles dominate the Google News selections even though they sit behind a pay wall if one uses an ad blocker ...)


    The way forward always starts with a minimal test.

      Google's post on this topic is here, but they're not pressuring anyone to use their smeared time -- they are just making the public aware of how they're handling the leap second by 'smearing' the additional second over a 20 hour window centered on midnight.

      They've also posted about how other organizations are doing their smear: UTC-SLS uses a one-thousand-second smear before the leap, Bloomberg uses a two-thousand-second smear after the leap, and Akamai and Microsoft are doing a 24-hour smear.
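      For comparison, the steady clock skew each of those schemes imposes while it is active follows directly from the quoted window lengths (a quick sketch; the dictionary labels are just shorthand for the schemes named above):

```python
# steady clock skew while each smear is active: one extra second,
# spread evenly over the quoted window
schemes = {
    "UTC-SLS (1000 s)":          1_000,
    "Bloomberg (2000 s)":        2_000,
    "Google (20 h)":           20 * 3600,
    "Akamai/Microsoft (24 h)": 24 * 3600,
}

skew_us = {name: 1_000_000 / window for name, window in schemes.items()}
for name, skew in skew_us.items():
    print(f"{name:26s} {skew:8.2f} microseconds per second")
```

      The shorter the window, the larger the skew while it lasts: the 1000-second smear runs a full millisecond per second off, while the day-long smears stay under 12 microseconds per second.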

      I believe that for most consumers of NTP data this smearing is mostly of academic interest. Use whichever time feed is appropriate for you.

      Alex / talexb / Toronto

      Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.

Re^2: Leap second coming up. Check your date handling code
by LanX (Saint) on Dec 27, 2016 at 12:25 UTC
    So instead of one leap second they apply 72,000 leaps of about 13.9 microseconds each?

    Is this really safe?
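    The arithmetic behind that question checks out, give or take rounding (a quick sketch):

```python
window_s = 20 * 3600             # Google's 20-hour smear window
print(window_s)                  # 72000 individual seconds

stretch_us = 1_000_000 / window_s
print(round(stretch_us, 1))      # each one stretched by about 13.9 microseconds
```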

    Cheers Rolf
    (addicted to the Perl Programming Language and ☆☆☆☆ :)
    Je suis Charlie!

      It all depends on what you're using "time" for and what you're using as your ground truth.

      As far as I understand, Google uses synchronized time for lots of stuff like query ordering and vector clocks, and it is important for them to have one ground truth, to the point that they install their own GPS clocks in their datacenters. As these are mostly for Google's own use, it's up to them to decide how to handle the additional second, and from a risk-assessment point of view I can understand that slightly longer seconds are likely less risky than one additional second numbered 60, together with an audit of your code for all the places where that becomes relevant.

      The situation becomes interesting when the private use/decision of Google leaks out into the real world for (say) Google Cloud Engine users or whoever else relies on Google infrastructure and timekeeping.

      Personally, I can't imagine a situation where the exact, synchronized duration of a second is important to you but you don't have your own synchronized clock(s). Still, you'll have to be prepared for an apparent one-second gap when comparing timestamps from Google infrastructure with timestamps from your own infrastructure: over time the two kinds of timestamps will diverge, until the smear window ends and they suddenly converge again.
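      That diverge-then-converge behaviour can be sketched with a toy model: assume a linear 20-hour smear centred on midnight, compared against a clock that steps back a full second when the leap is inserted (the numbers are an illustration, not Google's actual implementation):

```python
WINDOW = 20 * 3600      # smear window in seconds (20 h centred on midnight)
LEAP_AT = WINDOW // 2   # the stepping clock inserts its leap second mid-window

def offset(t):
    """Smeared clock minus stepping clock, t seconds into the window."""
    step = 1.0 if t >= LEAP_AT else 0.0   # stepping clock has jumped back a second
    return step - t / WINDOW              # smear accumulates the same second slowly

# zero at both ends of the window, half a second apart at the moment of the leap
for t in (0, LEAP_AT, WINDOW):
    print(f"t={t:6d}s  apparent gap: {offset(t):+.4f}s")
```

      The gap grows smoothly, flips sign when the non-smearing clock takes its one-second step, and shrinks back to zero by the end of the window.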

      If this is your first time dealing with diverging clocks, it will be an interesting learning experience, especially if you moved to GPS-based time exactly to avoid this situation.

        In the aftermath of this leap second, Cloudflare experienced an outage and blogged about it. It seems the root cause was code that expected a monotonically increasing value for seconds, but the Go library used handled the additional second by letting time go backwards one second. That led to negative durations for some events, which in turn were not handled gracefully.

        I think this would not have been a problem for Cloudflare if they too had stretched the duration of a second, at least for their machines running RRDNS. Of course, this is literally Monday-morning quarterbacking, as I wasn't part of the decision process there. Also, knowing and understanding how time and durations are used within your code is not easy unless you explicitly analyze your code for the usage of both.
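        A common defence against exactly this failure mode, sketched here in Python rather than the Go that Cloudflare's RRDNS uses, is to compute durations from a monotonic clock, which by definition never runs backwards, instead of subtracting wall-clock timestamps:

```python
import time

wall_start = time.time()        # wall clock: may be stepped by NTP or a leap second
mono_start = time.monotonic()   # monotonic clock: guaranteed never to go backwards

time.sleep(0.01)                # the work being timed

# Durations derived from the monotonic clock can never come out negative,
# which is exactly the invariant the outage post-mortem describes breaking:
duration = time.monotonic() - mono_start
assert duration >= 0.0
print(f"elapsed: {duration:.3f}s")
```

        Wall-clock timestamps remain the right tool for labelling events; the monotonic clock is the right tool for measuring the distance between them.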

      I don't know if it's relevant, but it occurred to me that the RTC crystals used in embedded systems are tuned by manufacturers to operate at the very specific frequency of 32.768 kHz. If I recall correctly, the original reasoning was that the MSB of a 16-bit counter toggles every second at that frequency, and now I'm guessing it's simply legacy that it continues to be done that way instead of moving to some faster multiple of that value (which would mean physically smaller crystals, less material, less board space, etc.). Kind of a moot point since modern technology tends to move away from traditional RTC crystals anyway, but I digress.

      I mention the above because the period of the 32.768 kHz frequency is about 30.5 microseconds, so the choice of 20 hours for the smear could have been as arbitrary as the number of hours necessary to get below half that value? Technically 19 hours would have been enough, but who likes odd numbers? Besides, we silly engineers have this always-add-a-margin-of-error habit, even in the digital/discrete realms where we know darn well it doesn't matter. Granted, I'm completely speculating on the reason for the 20-hour smear; I could be totally wrong, and I have no inside knowledge or anything like that.
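      That speculation is at least numerically consistent (a sketch using only the figures above):

```python
period_us = 1_000_000 / 32_768        # one 32.768 kHz cycle, about 30.5 us
half_us = period_us / 2               # about 15.26 us

# smallest whole number of hours whose per-second stretch stays below
# half a crystal cycle
hours = 1
while 1_000_000 / (hours * 3600) >= half_us:
    hours += 1
print(hours)    # 19 -- matching "technically 19 hours would have been enough"
```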

      Just another Perl hooker - Yes, I'll do really dirty code, but I charge extra.