in reply to Re: sorting type question- space problems
in thread sorting type question- space problems

Nice solution, thanks!!

However, the records are not <= 100 bytes; they are much larger. There are about 5 million records (lines), which sum to 20GB in total. Some records are big (> 5MB) and some are small (5 bytes), so an in-memory sort is not an option :(


Re^3: sorting type question- space problems
by mbethke (Hermit) on Sep 14, 2013 at 19:42 UTC

    First, RickardK's solution of using sort(1) makes sense. If it's there, use it: it's written in C, highly optimized, and well tested.
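For illustration only (not from the thread), shelling out to sort(1) from Perl might look like the following minimal sketch. It assumes GNU sort is on PATH, that the two keys are the first two whitespace-separated fields, and the file names 'unsorted'/'sorted' stand in for the real 20GB file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

$ENV{LC_ALL} = 'C';    # byte-wise key comparison, same order as Perl's cmp

# Tiny stand-in input so the sketch is runnable; the real file is 20GB.
open my $out, '>', 'unsorted' or die $!;
print $out "key2 key9 some payload\n",
           "key1 key3 other payload\n",
           "key1 key1 more payload\n";
close $out;

# -k1,1 -k2,2: sort by field 1, then field 2.
# -S caps sort's RAM use; beyond that it spills to temp files and merges,
# which is what makes it workable for inputs far larger than memory.
system( 'sort', '-k1,1', '-k2,2', '-S', '100M',
        '-o', 'sorted', 'unsorted' ) == 0
    or die "sort failed: $?";
```

The temp-file merge is the point here: sort(1) never needs the whole 20GB in memory.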

    That said, if your keys tend to be small compared to the rest of the records, in-memory sort may be feasible by recording only the keys and the file offsets of their respective lines:

    my @a;
    open my $fh, "<", "input" or die $!;
    while (1) {
        last if eof($fh);
        my $pos = tell($fh);
        my ($k1, $k2) = split /\s+/, <$fh>;
        push @a, [$k1, $k2, $pos];
    }
    foreach (sort { $a->[0] cmp $b->[0] or $a->[1] cmp $b->[1] } @a) {
        seek($fh, $_->[2], 0);
        print scalar <$fh>;
    }

    Probably not the fastest, but if you want to avoid external sorts, both in the sense of shelling out and in the sense of temp files, it may be worth a try. At 5M lines it will likely break the 100 MB limit, but without tricks, such as assuming some character never occurs in the keys and encoding each key pair and offset as one flat string, it's unlikely to get much smaller.
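The packed-string trick alluded to above can be sketched as follows (my own illustration, not code from the thread): NUL-pad each key to a fixed 8 bytes, append the offset as a fixed-width big-endian integer, and sort the flat strings directly. It assumes the keys contain no NUL bytes and a 64-bit Perl for the Q> pack template:

```perl
use strict;
use warnings;

# Hypothetical index entries: [key1, key2, byte offset of record in file]
my @recs = (
    [ 'key2', 'keyB', 100 ],
    [ 'key1', 'keyA',   0 ],
    [ 'key1', 'keyB',  50 ],
);

# Pack each entry into one flat 24-byte string: two NUL-padded 8-byte
# keys followed by a big-endian 64-bit offset ('Q>' needs 64-bit Perl).
# NUL padding sorts before any printable byte, so a plain string sort
# orders by key1, then key2.
my @packed = map { pack 'a8 a8 Q>', @$_ } @recs;

# Sort the flat strings, then recover the offsets in key order.
my @offsets = map { unpack 'x16 Q>', $_ } sort @packed;

print "@offsets\n";    # 0 50 100
```

One short string per record costs far less than an anonymous array holding three scalars, which is where the memory saving comes from.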

Re^3: sorting type question- space problems
by BrowserUk (Patriarch) on Sep 15, 2013 at 13:55 UTC
    Some records are big > 5MB and some are small (5 bytes) so in-memory sort is not an option :(

    Actually, it is. Or at least, it may be, if the lengths of the keys in your OP are somewhat representative of your real data and you have a couple of GB of RAM available.

    The lengths of the records are immaterial; the 5 million records can be represented in memory by the two keys plus a 64-bit integer offset of each record's start position within the file. If the two keys are less than ~8 bytes each, then the total memory requirement for 5 million anonymous arrays, each containing 2 keys + a file offset, is ~1.8GB.

    This code builds an index of the two keys + file offset in an AoA, sorts that AoA in memory, and then writes the output file by seeking in the input file, reading the appropriate record, and writing it out. For a 5-million-record file it takes a little over 2 minutes on my machine:

    #! perl -sw
    use strict;

    open IN, '<', $ARGV[0] or die $!;
    my @index;
    my $pos = 0;
    while( <IN> ) {
        my( $k1, $k2 ) = m[(^\S+)\s+(\S+)];
        push @index, [ $k1, $k2, $pos ];
        $pos = tell IN;
    }
    @index = sort{ $a->[0] cmp $b->[0] || $a->[1] cmp $b->[1] } @index;
    open OUT, '>', 'sorted';
    for my $r ( @index ) {
        seek IN, $r->[2], 0;
        print OUT scalar <IN>;
    }
    close OUT;
    close IN;
    __END__
    C:\test>dir unsorted
    15/09/2013  14:40       117,501,066 unsorted

    C:\test>wc -l unsorted
    5000000 unsorted

    C:\test>head unsorted
    key25 key15 xxxxxxxxxxx
    key28 key05 xxxxx
    key30 key18 xxxxxxxxxxxxxx
    key24 key03 xxxxxxxxx
    key41 key01 xxxxxxxxxxxxxx
    key12 key16 xxxxxxxxxxxx
    key38 key20 xx
    key19 key19 xxxxxxxxxxxxxxxxxx
    key30 key13 xxxxxxxx
    key16 key19 xxxxxxxxxxxxx

    [14:41:03.25] C:\test>1054101 unsorted
    [14:43:19.59] C:\test>

    [14:44:38.83] C:\test>head sorted
    key01 key01 xxxxxxxxxxxxx
    key01 key01 xxxxxxx
    key01 key01 x
    key01 key01 xxxxxxxx
    key01 key01 xxxxx
    key01 key01 xxxxxx
    key01 key01 xxxxxxxxxxxxxxxxx
    key01 key01 xxxxxx
    key01 key01 xx
    key01 key01 xxxxxxxxxx

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.