in reply to POSIX::mktime vs. Time::Local - Revisited.

According to Benchmark, POSIX::mktime is about 700% faster.

Benchmark: timing 20000 iterations of POSIX, timelocal...
    POSIX:  1 wallclock secs ( 0.57 usr + 0.03 sys = 0.60 CPU) @ 33333.33/s (n=20000)
timelocal:  6 wallclock secs ( 4.95 usr + 0.00 sys = 4.95 CPU) @ 4040.40/s (n=20000)

              Rate timelocal  POSIX
timelocal   4040/s        --   -88%
POSIX      33333/s      725%     --
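For reference, a comparison like the one above could be produced with a sketch along these lines (the iteration count matches the output; the particular date is my own assumption):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use POSIX ();
use Time::Local qw(timelocal);

# Broken-down time in struct tm order:
# sec, min, hour, mday, mon (0-based), year (minus 1900)
my @tm = (0, 30, 12, 20, 0, 103);    # 2003-01-20 12:30:00 local time

# Convert the same moment with both functions, 20000 times each.
cmpthese(20_000, {
    POSIX     => sub { POSIX::mktime(@tm) },
    timelocal => sub { timelocal(@tm) },
});
```

Both calls accept the same argument order here, which makes them easy to swap behind a single interface.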

Because it is written in pure Perl rather than C, Time::Local should work everywhere Perl does. And looking at the source, I don't see anything that should fail as long as Perl itself is working properly.

According to the Time::Local docs:

Please note, however, that the range of dates that can actually be handled depends on the size of an integer (time_t) on a given platform. Currently, this is 32 bits for most systems, yielding an approximate range from Dec 1901 to Jan 2038.
And as POSIX::mktime() returns undef when I try to give it a date past 2038, I assume it fails for the same reason.
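The undef return is easy to check for yourself; a minimal sketch (the specific dates are my own choice):

```perl
use strict;
use warnings;
use POSIX ();

# Ask for 2040-01-01 00:00:00 local time
# (year field is years since 1900, month is 0-based).
my $t = POSIX::mktime(0, 0, 0, 1, 0, 140);

# On a platform with a 32-bit time_t this date overflows and
# mktime returns undef; with a 64-bit time_t it succeeds.
print defined $t ? "epoch: $t\n" : "out of range for this time_t\n";
```

So code that calls POSIX::mktime() should always test the result for definedness rather than assume a valid epoch value.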

Replies are listed 'Best First'.
Re^2: POSIX::mktime vs. Time::Local - Revisited. (perspective)
by tye (Sage) on Jan 20, 2003 at 18:01 UTC

    So the one that works better and on more platforms takes about 1/4600th of a second longer to run on your platform. That makes the choice pretty clear to me for most cases.

    Perhaps if I were parsing huge log files where I had to convert a formatted date into an epoch date for each line, the run-time difference of about 1 second per 4600 lines would be enough to get me to use the faster but worse version.

                    - tye (enjoying one small piece of fairy cake)

      An additional argument for Time::Local is that POSIX is huge.

      However, to contrast this line of thinking:

      The converse of timelocal(), localtime(), does not accept a granularity finer than one second.

      Specifying seconds as a floating point number invites rounding errors when performing any calculations. IEEE 64-bit floating point numbers offer 52 bits of accuracy, not accounting for rounding errors. On most Unix operating systems, time_t is a 32-bit signed integer. This leaves only 20 bits to represent fractions of a second and absorb rounding errors. For a decimal comparison, floating point is accurate to roughly 14 decimal digits, and 32-bit integers are as large as 10 decimal digits. This leaves only four decimal digits of accuracy below second units.

      It is easy to keep seconds and microseconds as separate integers. Not only are the calculations much more precise, but this model improves future portability with time_t values that will likely be 64-bit signed integers.
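      The separate-integers model might look like this in practice, using Time::HiRes (which already returns the pair as two integers); the 2.5-second interval is just an illustration:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);

# gettimeofday returns whole seconds and microseconds as two
# integers, so no precision is lost to floating point.
my ($sec, $usec) = gettimeofday();

# Integer arithmetic on the pair: add an interval of 2.5 seconds
# expressed as (2 s, 500_000 us), carrying any microsecond
# overflow into the seconds field.
my ($add_s, $add_us) = (2, 500_000);
$usec += $add_us;
$sec  += $add_s + int($usec / 1_000_000);
$usec %= 1_000_000;

printf "%d.%06d\n", $sec, $usec;
```

      The carry step is the only subtlety; everything else is exact integer arithmetic, which also survives a move to 64-bit time_t unchanged.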

      POSIX::mktime() can be, and in fact is, implemented on other platforms. Notably, the few tests that I performed using ActiveState Perl Build 633 on Windows XP seemed to generate valid results.

      That all said, I personally often use Time::Local::timelocal(), and only rarely use POSIX::mktime(). I suspect this is more a case of habit than anything else... :-)

        While I'm not certain about this, I seem to recall reading that loading POSIX is pretty cheap if you limit yourself to the imports you need. Or is that hogwash?

        At any rate, I'd just about always prefer Time::Local for the reasons mentioned.

        More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.
        -- William A. Wulf, A Case Against the GOTO

        Makeshifts last the longest.