in reply to Re^2: How to improve speed of reading big files
in thread How to improve speed of reading big files
"I would think first that the GRT3 sub would be the fastest using the index sorting. But apparently not."
There is a famous quote (that I cannot actually remember :) that says something to the effect that programmers' intuition about where the bottlenecks occur in their code is usually wrong. I've found this is never more true than with Perl.
As sort essentially only shuffles pointers and aliases around, whether those point to short strings or long ones makes little difference.
When you have the full record appended to the sort key, and the first N chars are identical (as they often will be if all your records start with the date 'yyyymmdd' and you are selecting by date), the comparison has to scan further into the string before deciding the relationship than it does when a unique binary index is appended to the sort key, where it goes at most 4 bytes beyond the key. That makes the index sort a little faster. However, you then have to do a full array-slicing operation to reorder the records, which is comparatively more expensive than scanning a few extra bytes on each comparison.
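To make that trade-off concrete, here is a rough sketch of the two shapes of key. keyOf() is a hypothetical stand-in for whatever key extraction is in play, not your actual code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @records = ( "09:15:02.123 foo", "09:15:01.456 bar", "09:14:59.999 baz" );

    # Hypothetical key extraction: seconds-since-midnight as an integer.
    sub keyOf {
        my( $h, $m, $s ) = $_[ 0 ] =~ /(\d{2}):(\d{2}):(\d{2})/;
        return ( $h * 60 + $m ) * 60 + $s;
    }

    # GRT: the full record rides behind the packed key; one unpack recovers it.
    my @grt = map { unpack 'x[N]a*', $_ }
              sort
              map { pack( 'N', keyOf( $_ ) ) . $_ } @records;

    # Index sort (GRT3-style): only a 4-byte index follows the key, so each
    # comparison scans at most 4 bytes past the key; but recovering the
    # records costs a full array slice afterwards.
    my @order = map { unpack 'x[N]N', $_ }
                sort
                map { pack( 'NN', keyOf( $records[ $_ ] ), $_ ) } 0 .. $#records;
    my @byIndex = @records[ @order ];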
"However I do not understand why the GRT2 sub is slower than the ST sub?"
It's all down to the number of Perl primitives, I guess. One capture versus four in the regex, plus the math, is enough to tip the balance in favour of the ST for lowish record counts (~6000 in my quick runs of your benchmark).
However, your GRT2 implementation is flawed in several ways:
pack ('NA*', ( $1*60 + $2 )*60 + "$3.$4" ) . $_
That should either be pack ('NA*', ( $1*60 + $2 )*60 + "$3.$4", $_ )
or pack ('N', ( $1*60 + $2 )*60 + "$3.$4" ) . $_
That said, I understand why you probably did it that way. The IEEE 754 format is, in part, specifically designed to allow the binary representation to sort correctly. But that does not take into account that the Intel native representation byte-swaps it. So, in order to sort packed binary doubles correctly, you have to undo the byte-swapping.
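Here's a quick way to see the effect (a sketch; the scrambled order on the first line is machine-dependent, and the byte-reversal trick only holds for non-negative values, which timestamps are):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Values chosen to be exactly representable as doubles.
    my @n = ( 1.5, 100.25, 2.75, 10.0 );

    # Sorting the raw little-endian bytes gives a scrambled order...
    my @raw = sort map { pack 'd', $_ } @n;
    print join( ' ', map { unpack 'd', $_ } @raw ), "\n";   # not numeric order on Intel

    # ...but reversing the bytes restores sortable (big-endian) order:
    my @rev = sort map { scalar reverse pack 'd', $_ } @n;
    print join( ' ', map { unpack 'd', scalar reverse $_ } @rev ), "\n";  # 1.5 2.75 10 100.25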
Why capture the seconds and milliseconds as separate pieces and then stick them back together, when you can capture them as a single piece?
GRT2 is better coded as:
    sub sortLinesGRT2 {
        my $href = shift;
        return [
            map { unpack 'x[d]a*', $_ }
            sort
            map {
                /(\d{2}):(\d{2}):(\d{2}\.\d{3})/o;
                scalar( reverse pack( 'd', ( $1*60 + $2 )*60 + $3 ) ) . $_;
            } @$href
        ];
    }
This corrects the errors and sorts correctly. Surprisingly, it also runs a bit quicker than your original, overtaking the ST at around 1500 records rather than 6000. But it is still way slower than the straightforward GRT implementation:
    C:\test>for /l %i in (1,1,5) do @795905 -L=1e%i
             Rate  GRT2  GRT3    ST   GRT
    GRT2  15665/s    --  -26%  -26%  -45%
    GRT3  21060/s   34%    --    0%  -26%
    ST    21060/s   34%    0%    --  -26%
    GRT   28550/s   82%   36%   36%    --

            Rate  GRT2    ST  GRT3   GRT
    GRT2  1687/s    --  -14%  -29%  -48%
    ST    1969/s   17%    --  -18%  -39%
    GRT3  2390/s   42%   21%    --  -26%
    GRT   3251/s   93%   65%   36%    --

           Rate  GRT2    ST  GRT3   GRT
    GRT2  166/s    --   -1%  -28%  -47%
    ST    169/s    2%    --  -27%  -46%
    GRT3  232/s   40%   38%    --  -26%
    GRT   312/s   88%   85%   34%    --

            Rate    ST  GRT2  GRT3   GRT
    ST    13.4/s    --  -16%  -39%  -53%
    GRT2  16.0/s   20%    --  -27%  -44%
    GRT3  22.0/s   64%   37%    --  -23%
    GRT   28.6/s  113%   78%   30%    --

    (warning: too few iterations for a reliable count)
    (warning: too few iterations for a reliable count)
    (warning: too few iterations for a reliable count)
    (warning: too few iterations for a reliable count)
             Rate    ST  GRT2  GRT3   GRT
    ST    0.986/s    --  -28%  -45%  -56%
    GRT2   1.38/s   40%    --  -23%  -39%
    GRT3   1.78/s   80%   29%    --  -21%
    GRT    2.26/s  129%   64%   27%    --
Update: Finally, you are currently parsing the time twice: once in the filter sub and again for the GRT. If you prepended the sort key within the filter sub, you could avoid another complete pass over the records.
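Something along these lines (a sketch only; filterAndKey() is a hypothetical stand-in for your filter sub, which I haven't reproduced here):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @lines = ( "abc 09:15:02.123 x", "def 09:15:01.456 y", "ghi 09:14:59.999 z" );

    # Hypothetical combined filter: select a line and prepend its GRT key,
    # so the sort pass needs no second regex match.
    sub filterAndKey {
        my( $line ) = @_;
        return unless $line =~ /(\d{2}):(\d{2}):(\d{2}\.\d{3})/;
        return scalar( reverse pack 'd', ( $1 * 60 + $2 ) * 60 + $3 ) . $line;
    }

    my @keyed  = grep { defined } map { filterAndKey( $_ ) } @lines;
    my @sorted = map { unpack 'x[d]a*', $_ } sort @keyed;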