Re: Syntactic sugar for tail call optimizations
by Khen1950fx (Canon) on May 27, 2011 at 04:30 UTC
#!/usr/bin/perl
use strict;
use warnings;
use Sub::Call::Recur qw(:all);
use Memoize;

memoize('sum');

sub sum {
    my ( $n, $sum ) = @_;
    if ( $n == 0 ) {
        return $sum;
    }
    else {
        recur( $n - 1, $sum + 1 );
    }
}

foreach my $sum ( \&sum ) {
    print $sum->( 0, 0 ),    "\n",
          $sum->( 0, 1 ),    "\n",
          $sum->( 1, 0 ),    "\n",
          $sum->( 1, 1 ),    "\n",
          $sum->( 2, 2 ),    "\n",
          $sum->( 10, 1 ),   "\n",
          $sum->( 1000, 1 ), "\n";
}
It uses Clojure's special form 'recur' in place of 'redo'.
Thank you for pointing me to Sub::Call::Recur, very interesting!
especially:
It can be thought of as the redo operator, but for subroutines instead of loops. :)
Actually, I once read about recur in a Clojure book and wondered about its advantages over goto &sub.
Clearly - and as you demonstrated - there are two:
1. One can use the "usual" call interface f(PARAMS) instead of the cumbersome @_ = (PARAMS); goto &f
2. Memoize can't intercept and cache calls when goto-like constructions are used.
OTOH Sub::Call::Recur relies on XS code... so I think I'd rather choose between memoization for repeated runs and the speed and stack-sanity of one-time runs.
If I need both, I could still roll my own memoization hash.
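For the record, a hand-rolled memo hash layered over the plain goto &sub idiom might look like the following sketch (the names %cache, sum_memo and _sum are mine, not from any module). Note that the goto calls bypass the wrapper, so only top-level calls get cached - which mirrors the Memoize limitation mentioned above:

```perl
use strict;
use warnings;

my %cache;    # hand-rolled memoization hash, keyed on the argument list

sub sum_memo {
    my $key = join $;, @_;
    return $cache{$key} //= _sum(@_);
}

# tail-recursive worker using the goto &sub idiom:
# @_ is replaced, then the current frame is reused
sub _sum {
    my ( $n, $sum ) = @_;
    return $sum if $n == 0;
    @_ = ( $n - 1, $sum + 1 );
    goto &_sum;
}

print sum_memo( 1000, 1 ), "\n";    # deep "recursion" without stack growth
```

The second call with the same arguments is answered from %cache without touching _sum at all.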
Re: Syntactic sugar for tail call optimizations
by BrowserUk (Patriarch) on May 27, 2011 at 07:43 UTC
If you want fancier applications of this pattern please refer to the links I gave.
Here's an even simpler implementation: sub f4 { return $_[0] }
Of course, as soon as the recursive sub starts doing anything remotely requiring recursion your 'pattern' falls apart like bad knitting.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
And of course you can easily provide an example proving your polemic assertion...
Of course:
use enum qw[ CODE GRAPH START PATH SEEN ];

sub _pathsFrom {
    return $_[CODE]->( @{ $_[PATH] }, $_[START] )
        unless exists $_[GRAPH]->{ $_[START] };
    for ( @{ $_[GRAPH]->{ $_[START] } } ) {
        if ( exists $_[SEEN]->{ $_[START] . "-$_" } ) {
            return $_[CODE]->( @{ $_[PATH] }, $_[START] );
        }
        else {
            _pathsFrom(
                @_[ CODE, GRAPH ], $_,
                [ @{ $_[PATH] }, $_[START] ],
                { %{ $_[SEEN] }, $_[START] . "-$_", undef }
            );
        }
    }
}

sub pathsFrom(&@) { _pathsFrom( @_, [], {} ) }
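To see that run without the enum dependency, here is a self-contained sketch: use constant stands in for the enum module, and the demo graph and path collector are invented for illustration (leaves are simply nodes with no entry in the graph hash):

```perl
use strict;
use warnings;

# stand-in for: use enum qw[ CODE GRAPH START PATH SEEN ];
use constant { CODE => 0, GRAPH => 1, START => 2, PATH => 3, SEEN => 4 };

sub _pathsFrom {
    # leaf node: hand the completed path to the callback
    return $_[CODE]->( @{ $_[PATH] }, $_[START] )
        unless exists $_[GRAPH]->{ $_[START] };
    for ( @{ $_[GRAPH]->{ $_[START] } } ) {
        # an edge seen before closes a cycle: emit the path so far
        if ( exists $_[SEEN]->{ $_[START] . "-$_" } ) {
            return $_[CODE]->( @{ $_[PATH] }, $_[START] );
        }
        else {
            _pathsFrom(
                @_[ CODE, GRAPH ], $_,
                [ @{ $_[PATH] }, $_[START] ],
                { %{ $_[SEEN] }, $_[START] . "-$_", undef }
            );
        }
    }
}

sub pathsFrom(&@) { _pathsFrom( @_, [], {} ) }

# demo graph:  A -> B -> D,  A -> C
my %graph = ( A => [ 'B', 'C' ], B => ['D'] );
my @paths;
pathsFrom { push @paths, join ' ', @_ } \%graph, 'A';
print "$_\n" for @paths;    # A B D, then A C
```

The (&@) prototype is what lets the callback be passed as a bare block in front of the graph and start node.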
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Syntactic sugar for tail call optimizations
by Anonymous Monk on May 27, 2011 at 02:36 UTC
I frequently use redo as a more readable version of do/while. But in my off-the-cuff test using your functions, f1 is always about 50% faster than f2. I tested using perls 5.10 .. 5.14.
use Benchmark qw(cmpthese);
my ($a, $b) = (0,0);
my $limit = 10_000;
sub f1 { ++$a < $limit ? goto &f1 : return $a}
sub f2 {{ ++$b < $limit ? redo : return $b }}
cmpthese -1, { f1 => \&f1, f2 => \&f2 };
Bad benchmark: $a and $b are never reset between iterations, so out of the millions of passes of each sub, you only count up to 10,000 once. That only relevant pass is dropped as an outlier, so you're really benchmarking inc+compare+return versus loop+inc+compare+return. Of course the latter will be slower.
Also, you should probably include the naïve case.
use Benchmark qw(cmpthese);

my $limit = 10_000;

sub f0 { ++$_ < $limit ? &f0 : return $_ }
sub f1 { ++$_ < $limit ? goto &f1 : return $_ }
sub f2 {{ ++$_ < $limit ? redo : return $_ }}

cmpthese -1, {
    f0 => sub { $_ = 0; f0() },
    f1 => sub { $_ = 0; f1() },
    f2 => sub { $_ = 0; f2() },
};
      Rate   f1   f0   f2
f1   245/s   --  -15% -71%
f0   288/s  18%   --  -65%
f2   836/s 241% 190%   --

      Rate   f1   f0   f2
f1   212/s   --  -22% -78%
f0   271/s  28%   --  -72%
f2   954/s 350% 253%   --

      Rate   f1   f0   f2
f1   218/s   --  -23% -77%
f0   283/s  30%   --  -70%
f2   932/s 328% 229%   --
use Benchmark qw(cmpthese);
use diagnostics;

my $limit = 10_000;

sub f0 { ++$_ < $limit ? &f0 : return $_ }
sub f1 { ++$_ < $limit ? goto &f1 : return $_ }
sub f2 {{ ++$_ < $limit ? redo : return $_ }}

cmpthese -1, {
    f0 => sub { $_ = 0; f0() },
    f1 => sub { $_ = 0; f1() },
    f2 => sub { $_ = 0; f2() },
};

Deep recursion on subroutine "main::f0" at /home/lanx/B/PL/PM/tail_call.pl line 6 (#1)
    (W recursion) This subroutine has called itself (directly or indirectly)
    100 times more than it has returned.  This probably indicates an
    infinite recursion, unless you're writing strange benchmark programs, in
    which case it indicates something else.

      Rate   f1   f0   f2
f1  76.1/s   --  -31% -73%
f0   111/s  46%   --  -60%
f2   280/s 267% 152%   --
Tail calls reuse the current call frame and thus avoid stack overflows in deep recursions.
With a limit of 1_000_000, the naïve recursion made my system swap heavily and become unresponsive.
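The frame reuse is easy to observe with caller(): this sketch (the subs depth, plain and tail are invented for illustration) counts the live stack frames at the bottom of each recursion:

```perl
use strict;
use warnings;

# count the call frames currently on the stack
sub depth { my $d = 0; $d++ while caller($d); return $d }

# ordinary recursion: every call pushes a new frame
sub plain { $_[0] > 0 ? plain( $_[0] - 1 ) : depth() }

# tail call via goto &sub: the current frame is replaced, not stacked
sub tail { $_[0] > 0 ? do { @_ = ( $_[0] - 1 ); goto &tail } : depth() }

printf "plain: %d frames deep\n", plain(50);    # grows with the argument
printf "tail:  %d frames deep\n", tail(50);     # stays constant
```

However deep the tail version "recurses", the stack stays flat - which is why the 1_000_000 limit only hurts the naïve version.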