in reply to Efficient way to handle huge number of records?

If you're only doing exact matches on the sequence tags, a database is overkill and will probably be slower than an in-memory hash. If a hash mapping tags to full sequences is too big, you can save quite a bit of space by building (once) a hash that maps tags to byte offsets in your original file (untested):
    use Storable;

    open IN, "input.fa";
    my %id2off;
    while (<IN>) {
        next unless /^>/;
        chomp;
        # after reading the header line, the file pointer points to the
        # start of the next line, i.e. the start of the sequence data
        $id2off{substr($_, 1)} = tell(IN);
    }
    store \%id2off, 'input.idx';
Then later (as many times as you like) use this index to retrieve the sequence data, reading the wanted tags from STDIN or from files named on the command line (also untested):
    use Storable;

    my $id2off = retrieve 'input.idx';
    open IN, 'input.fa';
    while (<>) {                       # wanted tags from STDIN / @ARGV
        chomp;
        next unless exists $id2off->{$_};
        print ">$_\n";
        seek(IN, $id2off->{$_}, 0);    # move file pointer to start of sequence
        while (<IN>) {
            last if /^>/;              # stop at the next record's header
            print;
        }
    }
Sorting your data (so that you could binary-search the file instead of building an index) won't work so well: with variable-length records you can't seek straight to the n-th record, so you'd still end up scanning the file linearly to find the record boundaries.

Re^2: Efficient way to handle huge number of records?
by Anonymous Monk on Jul 11, 2013 at 22:28 UTC
    See Bio::DB::Fasta, which already implements this.
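    As a rough sketch of what using that module might look like (untested, and assuming the same input.fa and a list of wanted IDs on STDIN or the command line, as in the scripts above):

        use Bio::DB::Fasta;

        # builds an index file alongside input.fa on first use and reuses it afterwards
        my $db = Bio::DB::Fasta->new('input.fa');

        while (<>) {
            chomp;
            my $seq = $db->seq($_);    # full sequence string for this ID
            next unless defined $seq;
            print ">$_\n$seq\n";
        }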