in reply to Long list is long
I believe there is actually an internal optimization for sorting an array in place, so @foo = sort { ... } @foo; should execute faster than your example, because the array's contents never actually get pushed onto Perl's argument stack. Of course, that doesn't really fit your data structure too well.
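For illustration, a minimal standalone sketch of the two forms (made-up data, not your word counts):

    # Copying form: the list gets flattened onto the argument stack,
    # sorted, and assigned into a new array.
    my @counts = (42, 7, 19);
    my @sorted = sort { $a <=> $b } @counts;

    # In-place form: the same array appears on both sides, so perl can
    # sort it where it sits instead of copying its elements to the stack.
    @counts = sort { $a <=> $b } @counts;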
You could try emptying the hash with undef %f; after you pull out all the keys and values, to free that memory. That should cut your RAM usage roughly in half before the sort step (though it may already be too late to avoid running into disk swap).
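Roughly like this (a sketch only; %f and OUT come from your script, and the ascending-by-count order just matches the Tree::RB::XS example below):

    # Pull the keys and values out into plain arrays first...
    my (@words, @counts);
    while (my ($word, $count) = each %f) {
        push @words, $word;
        push @counts, $count;
    }

    # ...then drop the hash so its memory can be released before the sort.
    undef %f;

    # Sort indexes by count and write the pairs out.
    for my $i (sort { $counts[$a] <=> $counts[$b] } 0 .. $#counts) {
        printf OUT "%s\t%s\n", $words[$i], $counts[$i];
    }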
I'd be curious if you could make use of Tree::RB::XS here, to avoid ever having to expand a full list of the items:
    use Tree::RB::XS;

    for (@files) {
        open IN, $_;
        while (<IN>) {
            /^( [^\t\n]+ )\t( [0-9]+ )$/x;  # avoid backtracking
            $f{$1} += $2;
        }
    }

    my $t = Tree::RB::XS->new(compare_fn => 'int', allow_duplicates => 1);
    while (my ($word, $count) = each %f) {
        $t->insert($count, $word);
    }

    my $iter = $t->iter;
    printf OUT "%s\t%s\n", reverse $iter->next_kv
        while !$iter->done;
I apologize that I'm too lazy to generate some hundred-gigabyte files to benchmark this theory on my own.
If none of these keep the memory usage low enough to be useful, I would suggest trying a database. You could then let the database run the sorting query and iterate the rows using DBI. There's a huge overhead in loading the data into a database, but if you re-use this data, the database index would save you a lot of computing power in the long run.
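A rough sketch of that route, using DBD::SQLite purely as an example (the counts.db file, table, and column names are made up; OUT is the output handle from your script):

    use DBI;

    # One row per (word, count) pair read from the files; the database
    # aggregates and sorts so the full list never has to sit in RAM.
    my $dbh = DBI->connect("dbi:SQLite:dbname=counts.db", "", "",
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("CREATE TABLE IF NOT EXISTS counts (word TEXT, n INTEGER)");

    my $ins = $dbh->prepare("INSERT INTO counts (word, n) VALUES (?, ?)");
    for my $file (@files) {
        open my $in, '<', $file or die "$file: $!";
        while (<$in>) {
            my ($word, $count) = /^([^\t\n]+)\t([0-9]+)$/ or next;
            $ins->execute($word, $count);
        }
    }
    $dbh->commit;

    # Let the database do the grouping and sorting, then stream the rows.
    my $sth = $dbh->prepare(
        "SELECT word, SUM(n) AS total FROM counts GROUP BY word ORDER BY total");
    $sth->execute;
    while (my ($word, $total) = $sth->fetchrow_array) {
        printf OUT "%s\t%s\n", $word, $total;
    }

If you do end up re-querying the data later, an index on the word column is where the long-run savings would come from.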