in reply to Re^4: Efficient way to handle huge number of records?
in thread Efficient way to handle huge number of records?
This appears to be a job for a DB, if ...
See Item 3. The "if" is crucial.
Personally, I can see no logic at all in running a 32-bit OS on 64-bit hardware.
32-bit Perl on a 64-bit OS has one advantage -- at least in the Windows world -- that more XS modules build successfully. But even on Windows, the stuff that doesn't build tends to be either abandon-ware or weird, esoteric stuff like POE and Coro, which will either never work on Windows or, if they do, work only despite themselves.
But for the most part, a 64-bit build of Perl on a 64-bit OS with 8/16/32/64GB of RAM just makes doing anything involving the huge datasets that typify genomic work so much easier.
When you can pick up 8GB of RAM for £29, it makes no sense to try to squeeze those datasets through a 2 or 3GB memory pool.
Replies are listed 'Best First'.

Re^6: Efficient way to handle huge number of records? by Marshall (Canon) on Dec 11, 2011 at 14:21 UTC