The tests do a poor job of isolating any pure difference in execution speed between map, for, and pdl. Too much of the timing difference can be attributed to other factors, such as memory allocation.
But I see another issue here. If profiling determines that loops are your bottleneck, there are a number of approaches which may or may not be applicable to your situation:
- Find a better algorithm. If the computational cost, as described in Big-O notation, can be reduced from one complexity class to a lesser one, you win the battle (see the hash-lookup sketch after this list). But a better algorithm may not exist, in which case....
- Approximation: Is it acceptable to generate an approximation that is an order of magnitude less computationally intensive?
- Limit: Is it acceptable to generate only a portion of the solution? (A database query, for example, might use a "LIMIT 100" clause to avoid grabbing thousands of rows.)
- Parallel: Can you fork four processes on a four-core machine, each running on its own core and working on a portion of the solution? (See the fork sketch after this list.)
- Caching: Can you generate the solution once and store it for future use? (See the Memoize sketch after this list.)
- Distributed: Can you distribute the task among a larger group of machines, each working on a portion of the solution? (This is similar to parallel, but on a macro rather than a micro scale, to borrow terms from economics.)
- Get closer to the machine: Use Inline::C for optimized execution of a few hot lines (see the sketch after this list). Perl is already well optimized, albeit usually in generically applicable ways; if you can write C code that exploits your specific case, you may gain some efficiency. Note, however, that something which doesn't scale well in Perl probably doesn't scale well in C either. Think of it this way: if one car has a top speed of 70 mph and another 80 mph, an 80-mile drive takes about an hour and nine minutes in the slow car and an hour in the fast one. Meanwhile, Captain Kirk beamed himself there in 5 seconds because Spock developed a better algorithm.
- Computer speed: Along the same lines as "code in C" is "get a faster computer." This may gain you a little, but there's a limit to how much it can buy you.
- And way down at the bottom of the list comes deciding between 'for' and 'map'.
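A few quick sketches of the above, starting with the 'better algorithm' point. A classic Perl case is replacing a nested scan with a hash lookup, which turns an O(n*m) membership test into O(n+m) (the data here is made up):

    use strict;
    use warnings;

    my @wanted = (1 .. 1_000);
    my @data   = (1 .. 10_000);

    # Slow: re-scans @wanted for every element of @data -- O(n*m).
    my @hits_slow = grep { my $d = $_; grep { $_ == $d } @wanted } @data;

    # Fast: build a lookup hash once, then each test is O(1) -- O(n+m) overall.
    my %is_wanted = map { $_ => 1 } @wanted;
    my @hits_fast = grep { $is_wanted{$_} } @data;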
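For the parallel point, a minimal sketch of the fork approach, assuming the work splits into independent chunks (the chunking here is invented, and a real program would send each child's result back through a pipe or temp file):

    use strict;
    use warnings;

    my @chunks = ([1 .. 250_000],       [250_001 .. 500_000],
                  [500_001 .. 750_000], [750_001 .. 1_000_000]);

    my @pids;
    for my $chunk (@chunks) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {            # child: work on one slice, then exit
            my $sum = 0;
            $sum += $_ for @$chunk;
            exit 0;
        }
        push @pids, $pid;           # parent: remember the child
    }
    waitpid($_, 0) for @pids;       # wait for all four workers to finish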
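For caching, the simplest in-process form is memoization. The Memoize module, which has shipped with Perl since 5.8, wraps a function so that repeat calls with the same arguments come back from a cache:

    use strict;
    use warnings;
    use Memoize;

    sub expensive {
        my ($n) = @_;
        my $total = 0;       # stand-in for a genuinely costly computation
        $total += $_ for 1 .. $n;
        return $total;
    }
    memoize('expensive');

    print expensive(1_000_000), "\n";   # computed the hard way
    print expensive(1_000_000), "\n";   # answered from the cache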
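And for getting closer to the machine, Inline::C (a CPAN module, so this assumes it and a C compiler are installed) lets you embed a few lines of C and call them as ordinary Perl subs:

    use strict;
    use warnings;
    use Inline C => <<'END_C';
    /* a tight numeric loop -- the kind of thing C actually helps with */
    double sum_squares(int n) {
        double total = 0.0;
        int i;
        for (i = 1; i <= n; i++)
            total += (double)i * i;
        return total;
    }
    END_C

    print sum_squares(1_000_000), "\n";   # callable like any Perl sub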
This list is in no particular order, except that I listed 'finding a better algorithm' first because it's the purest option from a computer science standpoint. And I listed 'choose between for and map' last, as I see it as the least rewarding (and least pure from a CS view) option.
I guess what I'm saying is that selecting 'for' versus 'map' offers such a limited gain that it's really a last resort, particularly since there's no guarantee that a future version of Perl won't improve the optimization of one more than the other in a way that invalidates your original assumptions. If the efficiency difference between 'map' and 'for' is not defined, it's not guaranteed to stay the same.
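If the choice ever does matter to you, the honest move is to measure it on the perl you're actually running rather than trusting anyone's old numbers. A minimal sketch using the core Benchmark module (the doubling workload is an arbitrary stand-in for your loop body):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @in = (1 .. 10_000);

    # Run each version for at least 3 CPU seconds, then print a comparison table.
    cmpthese(-3, {
        with_map => sub { my @out = map { $_ * 2 } @in },
        with_for => sub {
            my @out;
            push @out, $_ * 2 for @in;
        },
    });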