in reply to Re^8: Re-orderable keyed access structure?
in thread Re-orderable keyed access structure?

Sure, if k1 is much smaller than k2, then for small values of n a cost of k1·n will be smaller than a cost of k2·log n. Using builtins is a good way of getting a very small k1, and I've asserted many times, even recently, that this is a sensible optimization goal in Perl.
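
As a back-of-the-envelope illustration with made-up constants (k1 = 1 for the builtin, k2 = 50 for the spelled-out logarithmic code), a few lines of Perl show where the linear cost finally overtakes the logarithmic one:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical constants: the O(n) approach costs k1 * n,
    # the O(log n) approach costs k2 * log2(n).
    my ( $k1, $k2 ) = ( 1, 50 );

    for my $n ( 2 .. 1_000_000 ) {
        if ( $k1 * $n > $k2 * log($n) / log(2) ) {
            print "k1*n overtakes k2*log2(n) at n = $n\n";
            last;
        }
    }

For these invented constants the crossover lands in the mid-hundreds; the real constants, and hence the real crossover, depend entirely on the implementations being compared.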

But as n grows, the constants eventually become irrelevant. Since BrowserUk says he cannot hold all of his data in memory, I assume this is such a situation. Even Perl's builtin splice won't move 100,000 elements down one position faster than spelled-out Perl code can swap 17 (≈ log2 100_000) elements.
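
For the curious, here is a rough sketch of that comparison using Benchmark's cmpthese on a flat 100,000-element array (not BrowserUk's actual structure, just the shape of the argument):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    my @array = ( 1 .. 100_000 );

    cmpthese( -3, {
        # builtin splice: reinsert the last element at the front,
        # shifting all 100,000 slots down one position (in C)
        splice_100k => sub {
            my $elem = pop @array;
            splice @array, 0, 0, $elem;
        },
        # spelled-out Perl: swap ~17 pairs of elements, roughly the
        # work a binary-heap sift would do for n = 100,000
        swap_17 => sub {
            for my $i ( 1 .. 17 ) {
                @array[ $i - 1, $i ] = @array[ $i, $i - 1 ];
            }
        },
    } );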

Makeshifts last the longest.


Re^10: Re-orderable keyed access structure?
by tye (Sage) on Aug 16, 2004 at 04:34 UTC

    Interesting that you pick 100,000. That was about the break-even point in some quick benchmarks. On one system the heap was a little faster at that point, while on another they were neck-and-neck (or else splice was twice as fast, depending on the test).

    Moving one more chunk of memory when you are already moving a bunch of them is extremely fast (the entire region is usually moved with a single assembly-language instruction). Dispatching Perl opcodes, by contrast, is surprisingly slow. So the constants involved here are very large.
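
    One minimal way to see that constant factor (this is just a sketch on a plain Perl array, not the code I used for the benchmarks mentioned above) is to do the identical amount of element movement once via the builtin and once spelled out in Perl:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Benchmark qw( cmpthese );

        my @a = ( 1 .. 10_000 );

        cmpthese( -3, {
            # one builtin call: every element move happens inside C,
            # essentially a single memmove of the whole region
            builtin_splice => sub {
                my $elem = pop @a;
                splice @a, 0, 0, $elem;
            },
            # identical data movement spelled out in Perl: one round of
            # opcode dispatch per element moved
            perl_loop => sub {
                my $elem = pop @a;
                $a[ $_ + 1 ] = $a[$_] for reverse 0 .. $#a;
                $a[0] = $elem;
            },
        } );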

    But you are correct, the heap will eventually win when the dataset gets large enough.

    I'll try to include the code I used for testing when I get access to it again...

    - tye