in reply to Re^4: Huge files manipulation
in thread Huge files manipulation

And that shows the weakness of your approach. It requires a priori knowledge about the keys.... You'd need to tune your program for different datasets.

That's not a weakness--it's a strength. It is very rare that we are manipulating truly unknown data. Using a tailored solution over a generic one is often the best optimisation you can make.

Especially as it takes only 18 seconds of CPU (about a minute elapsed) to get the information needed to decide on a good strategy:

[17:30:50.77] c:\test>perl -nle"$h{substr $_, 0, 1}++ } { print qq[$_ ; $h{$_}] for sort keys %h" huge.dat
a ; 338309
b ; 350183
c ; 579121
d ; 378275
e ; 244480
f ; 262343
g ; 195069
h ; 218473
i ; 255346
j ; 53779
k ; 42300
l ; 182454
m ; 315040
n ; 126363
o ; 153509
p ; 475042
q ; 28539
r ; 368981
s ; 687237
t ; 303949
u ; 162953
v ; 92308
w ; 155841
x ; 1669
y ; 18143
z ; 10294

[17:32:42.65] c:\test>
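For anyone who doesn't read one-liners fluently: -n wraps the code in a while(<>) loop, and the unbalanced } { closes that loop early so the print runs once at the end; -l chomps input and appends the newline on output. Spelled out as an ordinary script, it is roughly:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %h;
    while ( <> ) {
        chomp;                       ## -l strips the record separator
        $h{ substr $_, 0, 1 }++;     ## bucket by first character of the key
    }
    ## the } { trick: this runs once, after the input loop has finished
    print "$_ ; $h{$_}\n" for sort keys %h;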

Sure, you could add code to the above to run that as a first pass, and then apply some bin-packing algorithm or other heuristic to work out an optimal strategy (a rough sketch follows), but unless you are doing this dozens of times per day on different datasets, it isn't worth the effort. But 5 minutes versus 25 is worth it.
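Purely as illustration, here is one way such a second step might look: a greedy "largest bucket into the lightest bin" packer that reads the first-pass output above on STDIN and groups the leading characters into a fixed number of passes. The choice of 4 passes, the greedy heuristic, and the output format are all mine, not anything the OP asked for; a smarter packer could no doubt do better.

    #!/usr/bin/perl
    ## Reads lines like "a ; 338309" (the first-pass output) on STDIN and
    ## greedily packs the leading characters into $NPASSES roughly equal bins.
    use strict;
    use warnings;

    my $NPASSES = 4;    ## illustrative; pick whatever fits your memory budget
    my %count;

    while ( <STDIN> ) {
        my( $char, $n ) = /^(\S+)\s*;\s*(\d+)/ or next;
        $count{ $char } = $n;
    }

    my @bins = map { +{ size => 0, chars => [] } } 1 .. $NPASSES;

    ## Largest bucket first, always into the currently lightest bin.
    for my $char ( sort { $count{ $b } <=> $count{ $a } } keys %count ) {
        my( $lightest ) = sort { $a->{size} <=> $b->{size} } @bins;
        $lightest->{size} += $count{ $char };
        push @{ $lightest->{chars} }, $char;
    }

    printf "pass %d: %9d records  [%s]\n",
        $_ + 1, $bins[$_]{size}, join( '', sort @{ $bins[$_]{chars} } )
        for 0 .. $#bins;

Pipe the output of the one-liner into that and it prints which leading characters to handle together in each pass.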


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."