Re: Optimising processing for large data files.

by water (Deacon)
on Apr 10, 2004 at 17:03 UTC


in reply to Optimising processing for large data files.

So I wanted to see how a sliding buffer compared to a simple push/shift. Maybe I just implemented the sliding buffer in a slow way (suggestions?), but with my (likely poor) implementation, the simple push/shift is blazingly faster than the sliding buffer.

Did I blow the buffer implementation, or is native push/shift damn efficient? (probably both)

use strict;
use Test::More 'no_plan';
use constant SIZE => 500;
use Benchmark qw(:all);

################################################
# ROLL1: sliding buffer
################################################
{
    my ( $last, @x );

    sub init1 {
        $last = SIZE - 1;
        @x    = (undef) x SIZE;
    }

    sub roll1 {
        my ($val) = @_;
        $last = ( $last + 1 ) % SIZE;
        $x[$last] = $val;
        return \@x[ order() ];
    }

    sub order {
        my $first = ( $last + 1 ) % SIZE;
        return ( $first .. SIZE - 1, 0 .. $last );
    }

    sub dump1 {
        return join '-', @x[ order() ];
    }
}

################################################
# ROLL2: simple push and shift
################################################
{
    my @x;

    sub init2 {
        @x = (undef) x SIZE;
    }

    sub roll2 {
        my ($val) = @_;
        push @x, $val;
        shift @x;
        return \@x;
    }

    sub dump2 {
        return join '-', @x;
    }
}

################################################
# ensure both return the same results
################################################
for my $roll ( 5, 19, 786 ) {
    init1();
    init2();
    for ( 1 .. $roll ) {
        my $val = rand;
        roll1($val);
        roll2($val);
    }
    is( dump1(), dump2(), "same results for $roll rolls" );
}

################################################
# benchmark them
################################################
timethese( 100, {
    roll1 => sub { init1(); roll1($_) for 1 .. 10000 },
    roll2 => sub { init2(); roll2($_) for 1 .. 10000 },
} );
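Update: my guess (quite possibly wrong) at what I botched: roll1 pays O(SIZE) on every roll, because order() builds a 500-element index list and the return then takes a full slice of @x through it, while push/shift only ever touches the two ends of the array. Worse, \@x[ order() ] isn't a single array ref like roll2's \@x -- a backslash distributes over a slice, so it hands back a list of 500 per-element refs (the dropped egg basket?). Below is an untested sketch of a sliding buffer that keeps the O(1) write and defers all reordering to dump time. The init3/roll3/dump3 names are just illustrative, and it isn't wired into the benchmark above:

################################################
# ROLL3: sliding buffer with O(1) rolls
# (untested sketch -- write one slot per roll,
#  defer all reordering to dump time)
################################################
{
    my ( $last, @x );

    sub init3 {
        $last = SIZE - 1;
        @x    = (undef) x SIZE;
    }

    sub roll3 {
        # overwrite the oldest slot only; no
        # SIZE-element index list built per call
        my ($val) = @_;
        $last = ( $last + 1 ) % SIZE;
        $x[$last] = $val;
    }

    sub dump3 {
        # pay the reordering cost once, on demand
        my $first = ( $last + 1 ) % SIZE;
        return join '-', @x[ $first .. SIZE - 1, 0 .. $last ];
    }
}

If that analysis holds, roll3 should land in the same ballpark as the push/shift version, since each roll does a constant amount of work regardless of SIZE.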

Re: Re: Optimising processing for large data files.
by BrowserUk (Patriarch) on Apr 10, 2004 at 17:16 UTC

    You're comparing apples and eggs--and dropping the egg basket:)

    I'll try to come up with a better explanation and post it tomorrow.


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
      Yah, I thought so. My "fast" algorithm is, what, 1000x slower? So obviously I missed something important, and missed it pretty badly. <g>

      I'd welcome any advice -- not that this matters for any pressing real project, but just to improve my skills.

      Thanks, BrowserUk, looking forward to your post....
