in reply to Problem emulating built-ins like map and grep.

Btw, I seriously dislike your use of the binary operators for math there. You have much better options to gain speed without obfuscating the code. F.ex, if your @_ check never fails, your indiscriminate use will have cost you much more time than those two binary ops are ever going to save. Furthermore, you're mapping in void context - another waste of effort. (What was he thinking?? -- Ed.) Here is a version that I find more self-documenting and more concise - and it even gets rid of one division entirely! :^)
#!/usr/bin/perl -w
use strict;

sub map2 (&@) {
    my $code = shift;
    if(@_ % 2) {
        require Carp;
        Carp::croak('Odd number of values in list');
    }
    local ($a, $b);
    my @r;
    push @r, $code->() while ($a, $b) = splice @_, 0, 2;
    @r;
}

my %hash = qw/A B C D E F G H/;

# let's remove the multiple calls to print() and
# make the use of a map justifiable, shall we?
print map2 sub { "key $a => value $b\n" }, %hash;
Update: I can't believe I didn't think about the list being returned. However, the while loop still avoids building a 1 .. @_ / 2 list, thus still saving effort.
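A hedged sketch of how the two approaches under discussion might be compared using the core Benchmark module; the pair data, the sub names, and the list size are my own invention, not from the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# hypothetical test data: 1000 key/value pairs
my @pairs = map { ("key$_", "val$_") } 1 .. 1000;

cmpthese(-1, {
    # the while/splice loop: consumes a copy of the input two at a time
    splice_while => sub {
        my @in = @pairs;
        my ($a, $b);
        my @r;
        push @r, $a . '=' . $b while ($a, $b) = splice @in, 0, 2;
        return scalar @r;
    },
    # the map over an index list 1 .. @in/2, as mentioned above
    map_index => sub {
        my @in = @pairs;
        my @r = map { $in[2*$_-2] . '=' . $in[2*$_-1] } 1 .. @in / 2;
        return scalar @r;
    },
});
```

Either sub returns 1000 here; only the relative timings printed by cmpthese are of interest.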

Makeshifts last the longest.

Replies are listed 'Best First'.
Re: Re: Problem emulating built-ins like map and grep.
by BrowserUk (Patriarch) on Jan 22, 2003 at 22:50 UTC

    In truth, the use Carp; (and the associated test for oddness) was tacked in just for posting; it would always appear at the top of the program in real code. Does use have a huge runtime cost? I was under the impression that use was effectively a compile-time directive?

    Personally, coming from a C background, I find ... if @_ & 1; an eminently readable and obvious test for 'oddness'. It wasn't done as an optimisation; it's just the clearest test for oddness that I know of. YMMV.

    The version of the code in my private library doesn't contain the oddness test at all. Though I think I will go back and add it, as it already has a use Carp; at the top anyway. I'll still use @_ & 1 though :)
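For readers less used to C idioms, a minimal illustration of the two equivalent oddness tests; the function name here is my own, not from either poster:

```perl
use strict;
use warnings;

# returns true when given an even number of arguments
sub check_even {
    my @args = @_;
    return 0 if @args & 1;   # bitwise test: low bit set => odd count
    # equivalently: return 0 if @args % 2;
    return 1;
}

print check_even(qw/a b c d/) ? "even\n" : "odd\n";   # prints "even"
print check_even(qw/a b c/)   ? "even\n" : "odd\n";   # prints "odd"
```

In numeric context an array yields its element count, so `@args & 1` and `@args % 2` both isolate the parity of the argument list.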

    the while loop still avoids building a 1 .. @_ / 2 list, thus still saving effort.

    Aren't you just trading a >> and building one list for building a local array and then converting that back to a list to return it? I doubt there's much in it in performance terms either way, but using an unnecessary intermediate variable seems very un-Aristotle-like:)

    re: # let's remove the multiple calls to print() and make the use of a map justifiable, shall we?

    As you pointed out yourself in your earlier post, your alternative wouldn't have demonstrated the problem I was describing, which was the only purpose of the code.


    Examine what is said, not who speaks.

    The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.

      Aren't you just trading a >> and building one list for building a local array and then converting that back to a list to return it?
      You are building two lists, one as the dummy input to map, and one as its return value. The while loop makes the latter explicit and renders the former superfluous. You may have a point in that there's a hidden second list built to return the array contents (I'm not sure about that) - that just means the solutions are even, though.
      I doubt there's much in it in performance terms either way, but using an unnecessary intermediate variable seems very un-Aristotle-like:)

      You're right. And I say that because I don't like either solution much. I prefer the while approach here since it reads more naturally, as opposed to the dummy list "hack" you need for map. All that said and done though, it really is an ugly shortcoming of Perl that all of the list-oriented operators can only step through a list one element at a time. And because that's true for all of them, homegrown solutions are inevitably ugly too.

      Finally, it occurred to me that if you insist on map, you could generate the dummy list more economically, since you don't actually use its elements:

      map { ... } (1) x (@_ / 2);
      # or even
      map { ... } (undef) x (@_ / 2);
      Especially the latter will probably take a lot less memory to process for very large lists (though still a significant amount).
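For completeness, a sketch of what a full map-based map2 might look like using the dummy-list trick; map2_dummy and its index bookkeeping are my own assumptions for illustration, not code from the thread:

```perl
use strict;
use warnings;

# map2 variant driven by a dummy list of (undef) x (pairs),
# stepping an index through the real input two at a time
sub map2_dummy (&@) {
    my $code = shift;
    my @in = @_;
    my $i = 0;
    return map {
        local ($a, $b) = @in[$i, $i + 1];  # expose the current pair
        $i += 2;
        $code->();
    } (undef) x (@in / 2);                 # one dummy element per pair
}

my %hash = qw/A B C D/;
print map2_dummy { "$a => $b\n" } %hash;
```

The (&@) prototype allows the bare-block calling style, just as with the splice-based version above.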

      Makeshifts last the longest.

        Surprisingly, building a list from 1 (or 0 or any integer) using the x op seems to be appreciably (about 25%) faster than using undef?

        (1)x10000 is slightly faster than 1 .. 10000, by 4% or so. Presumably due to the need for an extra incr operation between stores, though on some architectures that have an auto-increment, the cost of a store would be the same as an incr-and-store.

        I can't detect any difference in the memory size. 408k for a list of 100,000 on my machine, which translates to 4 bytes per value plus a little overhead.
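A hedged sketch of how these timings might be reproduced with the core Benchmark module; the actual percentages will vary by perl build and machine, and the sub names are my own:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $n = 100_000;

# compare the three list-building idioms discussed above
cmpthese(-1, {
    ones   => sub { my @l = (1)     x $n; scalar @l },
    undefs => sub { my @l = (undef) x $n; scalar @l },
    range  => sub { my @l = 1 .. $n;      scalar @l },
});
```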


        Examine what is said, not who speaks.

        The 7th Rule of perl club is -- pearl clubs are easily damaged. Use a diamond club instead.