I am awfully suspicious of my tests taking almost the same amount of time to run no matter how many elements are in the dataset.

use Benchmark qw(timethese);

my @size = (100, 1000, 10000, 100000);
foreach my $size (@size) {
    print "size=$size\n";
    my $now = 8;
    my %url = ( monday => { map { ($_ => 1) } 1 .. $size } );
    timethese(0, {
        Grep    => q{ $now = (sort grep { $_ <= $now } keys %{$url{'monday'}})[-1]; },
        Ternary => q{ $now = ($now < $_ && $_ < 8 ? $_ : $now) for keys %{$url{'monday'}}; },
        Max     => q{ foreach ( keys %{$url{'monday'}} ) { $now = $_ if $_ > $now } },
    });
}
I think I am onto something... I don't think the code posted in the original question really benchmarks what the user thinks it does... I'm crafting a new reply to the original question that should be up soon.
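A minimal sketch of what I suspect is going on (the helper sub here is hypothetical, just to mimic Benchmark.pm): timethese eval()s string benchmarks inside Benchmark.pm's own scope, where the caller's lexical (my) variables like %url and $now don't exist, so the string versions may be timing loops over an empty hash no matter what $size is. A code reference, by contrast, is a closure and does see the caller's lexicals.

```perl
#!/usr/bin/perl
# run_string_elsewhere() is a stand-in for Benchmark::timethese,
# which evals string benchmarks in its own scope, not the caller's.
sub run_string_elsewhere {
    my ($code) = @_;
    return eval($code);     # no lexical $size is in scope here
}

my $size = 10_000;          # a lexical, like %url and $now in the test

my $seen_by_string  = run_string_elsewhere('defined $size ? "yes" : "no"');
my $seen_by_closure = (sub { defined $size ? "yes" : "no" })->();

print "string eval sees \$size: $seen_by_string\n";   # prints "no"
print "closure sees \$size:     $seen_by_closure\n";  # prints "yes"
```

If this is the problem, passing code references (sub { ... }) to timethese instead of q{ ... } strings should make the timings scale with the dataset size again.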
In reply to RE: RE: Re: Algorithm Efficiency vs. Specialized Hardware?
by lhoward
in thread Algorithm Efficiency vs. Specialized Hardware?
by Russ