When I say "amortized average O(1)" I wasn't saying anything about the size of the constant. Merely that it is a constant.
Certainly I wouldn't expect entering the push op to be free. Entering a for loop isn't free either, and doing both 100 times is more expensive than doing them once. Reallocation is also not free. Amortized constant does not mean free.
I don't know whether you're reliably measuring the overhead due to reallocating. I guarantee you that the large array has at most one reallocation. (When you reallocate, you reserve a buffer with an extra 1/5 of the length that you need. 1/5 of 130900 is large enough that we're NOT doing that twice.) From the figures I suspect that you are measuring it, and I suspect that if you reversed the order of pushing by 1 and pushing by 100 you'd see an interesting change. (I can't run this myself since I don't have Benchmark::Timer, and I'm not going to bother installing it on a machine that is booted from Knoppix.)
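To make that concrete, here is a toy model of the growth policy as described above (reserve an extra 1/5 of the needed length on each reallocation). It is only a sketch of the policy, not the interpreter's actual allocation code, but it shows why 100 single-element pushes onto a full array of 130900 elements trigger at most one reallocation:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $len      = 130_900;
    my $cap      = $len;   # assume the buffer starts exactly full
    my $reallocs = 0;

    for (1 .. 100) {       # push one element, 100 times
        $len++;
        if ($len > $cap) {
            # grow to what we need plus 1/5 extra headroom
            $cap = $len + int($len / 5);
            $reallocs++;
        }
    }
    print "reallocations during 100 pushes: $reallocs\n";  # prints 1

The first push overflows the buffer and reallocates to a capacity of 157081, so the remaining 99 pushes fit with no further reallocation.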
Also note that when I say "amortized constant" I mean that if you build a large array by pushing one element at a time, the sum of the costs of all the reallocations comes out to (order of magnitude) a constant times the length of the array. However, the distribution of those costs is very uneven: as time goes by the odds of a given push causing a reallocation go down, but when one does happen it costs a lot more. Therefore a test like yours is the wrong way to test my assertion. You want to compare how long it takes to build up arrays of different lengths and see whether the times follow a linear pattern.
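Here is a minimal sketch of that test, using the core Time::HiRes module instead of Benchmark::Timer (the sizes are arbitrary; pick whatever your machine handles comfortably). If push is amortized O(1), the seconds-per-element figure should stay roughly flat as the lengths grow:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    for my $n (10_000, 100_000, 1_000_000) {
        my $start = time;
        my @array;
        push @array, $_ for 1 .. $n;    # build by pushing one at a time
        my $elapsed = time - $start;
        printf "%9d elements: %.4fs total, %.2e s/element\n",
            $n, $elapsed, $elapsed / $n;
    }

A roughly constant per-element column across the sizes is the linear pattern I'm talking about; a per-element cost that grows with the length would falsify the claim.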
In reply to Re^5: Producing a list of offsets efficiently by tilly
in thread Producing a list of offsets efficiently by BrowserUk