in reply to Re: Sorting Gigabytes of Strings Without Storing Them
in thread Sorting Gigabytes of Strings Without Storing Them

The problem with that is that it takes far longer (3 1/2 minutes vs. 10 seconds) than using the system sort utility:

[12:47:29.37] c:\test>junk4 ACGT34.dat >nul
[12:50:49.45] c:\test>
[12:53:00.73] c:\test>sort ACGT34.dat /O nul
[12:53:09.51] c:\test>

And that is just a million 34-char strings, not "2GB". Admittedly, most of the time is spent reading, packing and unpacking the data, rather than in your fine sort routine which takes less than half a second to do its work.

Actually, part of the problem is building up that big scalar piecemeal. If you watch the memory usage as that while loop repeatedly expands $packed, you'll see transient spikes: those spikes are where Perl/the CRT has to go to the OS to grab an ever larger chunk of RAM into which to copy the slowly expanding scalar, freeing the old chunk once the copy is done. That constant allocation, reallocation and copying really hampers the in-memory approach.
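As a minimal illustrative sketch (not taken from the code in this thread; the 10e6 size is just an example), one way to sidestep that reallocation churn is to preallocate the scalar's buffer once and then empty the string, since assigning a shorter value back does not release the buffer Perl has already grabbed:

```perl
use strict;
use warnings;

# Preallocate one large buffer up front (the size here is arbitrary).
my $packed = "\0" x 10_000_000;

# Emptying the string resets its length but keeps the allocated buffer,
# so subsequent appends grow into existing storage instead of forcing
# a realloc-and-copy each time the scalar outgrows its allocation.
$packed = '';

$packed .= 'some chunk' for 1 .. 1_000;
```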

You can knock around 30 seconds off the 3.5 minutes by preallocating the memory. In this version of your code, I use a ram file to allocate a chunk bigger than required, populate it by writing to the ram file, and then truncate the string to its final size using chop, which avoids the seesaw effect on the memory allocation:

#!/usr/bin/perl
use strict;
use warnings;

use Sort::Packed qw(sort_packed);

my $packed = '';
open RAM, '>', \$packed;
seek RAM, 10e6, 0;
print RAM chr(0);
seek RAM, 0, 0;

my ($len);
my %val = (A => 0, C => 1, G => 2, T => 3);
my %rev = reverse %val;

sub compress {
    my @data = split //, scalar(reverse shift);
    my $out = '';
    for (my $i = 0; $i < @data; $i++) {
        my $bits = $val{$data[$i]};
        defined $bits or die "bad data";
        vec($out, $i, 2) = $bits;
    }
    scalar reverse $out;
}

sub decompress {
    my $data = reverse shift;
    my $len  = shift;
    my $out;
    for (my $i = 0; $i < $len; $i++) {
        $out .= $rev{vec($data, $i, 2)};
    }
    scalar reverse $out;
}

while (<>) {
    chomp;
    ($len ||= length) == length or die "bad data";
    print RAM compress $_;
}
close RAM;
chop $packed while length($packed) > $. * 9;

my $bytes = int(($len * 2 + 7) / 8);
my $n = length($packed) / $bytes;

sort_packed "C$bytes" => $packed;

for (my $i = 0; $i < $n; $i++) {
    print decompress(substr($packed, $i * $bytes, $bytes), $len), "\n";
}
__END__
[13:50:49.29] c:\test>junk4 ACGT34.dat >nul
[13:53:44.53] c:\test>

The result is that the overall time taken is reduced to just under 3 minutes, which leaves the bulk of the time spent packing and unpacking the data. And I cannot see any easy way of speeding that up.

Maybe Perl needs a pack template for dealing with genomic data? Trouble is, there are several variations. As well as ACGT, they also use forms which contain 'N', 'X' and a few other characters for different situations.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Replies are listed 'Best First'.
Re^3: Sorting Gigabytes of Strings Without Storing Them (bytes)
by tye (Sage) on Dec 24, 2008 at 08:01 UTC

    Just off the top of my head, I'd probably try replacing the 'pack' stuff by producing whole bytes at a time. Something like:

    use Algorithm::Loops qw< NestedLoops >;

    my @bases= qw< A C G T >;
    my @quad= map { join '', @$_ } NestedLoops( [ ( \@bases ) x 4 ] );
    my %byte;
    @byte{ @quad }= map pack("C",$_), 0 .. %#quad;

    my $carry= '';
    while( <> ) {
        chomp;
        substr( $_, 0, 0, $carry );
        my $pack= '';
        s/(....)/ $pack .= $byte{$1}; '' /g;
        $carry= $_;
        print RAM $pack;
    }
    print RAM $byte{ substr( $carry . 'AAA', 0, 4 ) }
        if $carry;

    But I haven't looked at the rest of this thread recently nor actually tried my suggestions. I can think of lots of different ways to pull out 4 bases at a time and some ways might have a noticeable impact on speed.

    - tye        

      Hm. I can't get this to work. There's a couple of obvious things:

      1. %#quad; => $#quad;
      2. /ge

      But the main problem is my long standing trouble with your NestedLoops. You're passing in an anon array of four references to an array containing ACGT: [ ( \@bases ) x 4 ], but that results in Not an ARRAY reference... from the @$_ within the map, and I can't see how to put that right?
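      For reference, a sketch of one way that appears to get @quad built, going by the Algorithm::Loops documentation rather than anything tested here: with a code block supplied, NestedLoops calls it once per combination with the current loop values in @_, and in list context returns the collected results:

```perl
use strict;
use warnings;
use Algorithm::Loops qw< NestedLoops >;

my @bases = qw< A C G T >;

# The callback joins each combination of loop values into one string,
# so @quad ends up as 'AAAA', 'AAAC', ... 'TTTT' (256 strings in all).
my @quad = NestedLoops( [ ( \@bases ) x 4 ], sub { join '', @_ } );

my %byte;
@byte{ @quad } = map pack( 'C', $_ ), 0 .. $#quad;
```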

      Also, for a proper comparison, you'd need a 'decode' operation.
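      For completeness, a hypothetical decode to pair with the 4-mer lookup idea (the table-building loop and the names %quad_of and decode are mine, not from the code above): invert the 4-mer-to-byte map and trim the 'A' padding from the final partial quad:

```perl
use strict;
use warnings;

# Rebuild the 4-mer => byte table (plain nested loops, for brevity).
my @bases = qw< A C G T >;
my @quad;
for my $a (@bases) {
    for my $b (@bases) {
        for my $c (@bases) {
            push @quad, "$a$b$c$_" for @bases;
        }
    }
}
my %byte;
@byte{ @quad } = map pack( 'C', $_ ), 0 .. $#quad;

# Hypothetical inverse table: byte value back to its 4-base string.
my %quad_of = reverse %byte;

# decode( $packed, $len ): expand each byte to its 4 bases, then trim
# the trailing padding added when the last quad was incomplete.
sub decode {
    my ( $packed, $len ) = @_;
    my $out = join '', map $quad_of{$_}, split //, $packed;
    return substr( $out, 0, $len );
}
```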

      Generally, I have tried various pure-Perl methods of packing and unpacking genomic data--there's one of my previous attempts here, and there are others kicking around--but given that it would require saving 3 minutes (on 1e6 cycles) during the packing and unpacking to allow salva's in-memory sort approach to compete with the external sort, I fear that none of them is going to achieve much.

      A pair of Inline::C routines might achieve sufficient performance to make an in-memory approach viable, but given the shortness of the strings and the need to transition perl->C->perl for every 34 chars, even that seems unlikely. If the inputs were always multiples of 4 bytes and always 4-byte aligned, maybe. But dealing with the edge cases where those two conditions cannot be guaranteed makes things much slower.

