http://qs1969.pair.com?node_id=1165172


in reply to The "right" way to make your script run with old versions of perl

Unfortunately we have some servers with perl 5.6 and 5.8 on them. I want my script to be able to still run with the older versions, but if it is running something newer I want it to use an efficient module. [...] Is this really the best way to go? Just doing this ps[e]udocode here looks really messy to maintain.

I don't like that approach, for exactly that reason. Perl has excellent backwards compatibility. You don't have to rewrite all of your perl 5.005 scripts and modules just because your shiny new development machine features perl 5.40.98. They will run just fine, and probably quite efficiently, perhaps even better than with the old perl version. There is a large Perl code base, not only on CPAN; the Perl core developers are very aware of that fact and try very hard not to break or drastically slow down old code.

If you read the various perldeltas, you will find that nearly every new perl version speeds up various parts of perl by improving algorithms or fine-tuning existing ones.

use strict is just a tiny artefact of that backwards compatibility. Turning strict mode on by default would be quite easy, but it would break old perl code. So you have to add use strict; to your new code. The same goes for use feature.
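For new code, the opt-in is just a couple of lines at the top of the file. A minimal sketch (the say() part assumes perl 5.10 or later):

    use strict;
    use warnings;
    use feature 'say';   # say() has existed since perl 5.10, but only on request

    say 'strict, warnings and say are enabled for this file only';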

So, if your code must run on perl 5.6, write it for perl 5.6. Avoid all features that require a newer perl. You will get a single, maintainable code base, with no extra tricks required. It will run well on 5.6, and perhaps slightly faster with each new perl version.
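You can even have perl enforce that baseline for you. A rough sketch of what that looks like in practice (the file name is made up):

    use 5.006;      # refuses to compile on anything older than perl 5.6
    use strict;
    use warnings;

    # Stick to constructs 5.6 already knows: lexical filehandles and
    # three-arg open are fine here, say() and // are not.
    open my $fh, '<', 'jobs.txt' or die "Cannot open jobs.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        print "$line\n";
    }
    close $fh;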

One day, the last server running 5.6 will disappear, and you can define 5.8 as the minimum, using the 5.8 features where you like. Wash, rinse, repeat.
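A sketch of that next step, assuming the 5.6 boxes really are gone: bump the declared minimum and start using 5.8-only niceties such as in-memory filehandles.

    use 5.008;      # now the floor is perl 5.8
    use strict;
    use warnings;

    # In-memory filehandles (PerlIO) only work from 5.8 onwards.
    my $report = '';
    open my $out, '>', \$report or die "Cannot open in-memory handle: $!";
    print {$out} "job done\n";
    close $out;
    print $report;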


How do you define "an efficient module"?

Lines of code required? I really don't care about that. Code is the smallest part of a project, easily buried under tons of documentation. (And documentation requirements get exponentially worse once you start working in medical or aerospace environments.)

Time required to process a job? CPU usage for the job? Memory usage? Electric power usage? How do you measure that? How big are the differences? If the "efficient module" saves you a few nanoseconds, a few bytes, or a few microwatt-hours per job (that is, seconds, megabytes, or watt-hours per day) compared to the "inefficient module", you are doing it wrong. You are micro-optimizing somewhere deep in the noise floor, below any sane measurement.
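If you really want numbers instead of gut feeling, the core Benchmark module will tell you how far down in the noise you are. A rough sketch; the two subs are made-up stand-ins for whatever the competing modules actually do:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    sub with_efficient_module   { my $x = 0; $x += $_ for 1 .. 1_000 }
    sub with_inefficient_module { my $x = 0; $x += $_ for 1 .. 1_500 }

    # Run each candidate for about 5 CPU seconds and compare the rates.
    cmpthese(-5, {
        efficient   => \&with_efficient_module,
        inefficient => \&with_inefficient_module,
    });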

Lines of code required to use a module? If a module works great but has a really garbage interface, write a thin wrapper that offers a sane interface, fixing that once and for all.
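A sketch of such a wrapper: My::Date and iso_date() are names I made up, but POSIX::strftime and localtime() are real, and their list-of-fields calling convention is exactly the kind of thing worth hiding in one place.

    package My::Date;
    use strict;
    use warnings;
    use POSIX qw(strftime);
    use Exporter 'import';
    our @EXPORT_OK = ('iso_date');

    # localtime()'s list interface (0-based months, 1900-based years) is
    # easy to get wrong; wrap it once and never think about it again.
    sub iso_date {
        my ($epoch) = @_;
        return strftime('%Y-%m-%d %H:%M:%S', localtime($epoch));
    }

    1;

Callers then just do use My::Date 'iso_date'; and print iso_date(time); without ever seeing the raw field list.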

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)