in reply to Re^3: Efficient way to handle huge number of records?
in thread Efficient way to handle huge number of records?

I used BrowserUk's sub to generate the data. On the key part it was 40 bytes long, but on the data part it was 320 bytes, so I 'substr' it to 80 ...
$data = substr(rndStr(80,qw[a c g t]),0,80);

Sorry, but you must have typo'd or c&p'd my code incorrectly, because there should be no need to substr the output of rndStr():

sub rndStr{ join'', @_[ map{ rand @_ } 1 .. shift ] };;

$x = rndStr( 80, qw[a c g t] );;

print length $x, ':', $x;;
80 : actaatcttgcgccgcggcttcatacgagatgaatagtacgaaaacttggatacacctgtatcatagaagggccgctgcg
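
(For anyone reading along: my reading of the one-liner is that the range 1 .. shift peels the requested length off @_ first, so inside the map block @_ holds only the alphabet, and the slice picks one random element per position. A more long-hand equivalent, purely as a sketch, with the name rndStr_expanded being mine:)

sub rndStr_expanded {
    my $len      = shift;              # requested length, e.g. 80
    my @alphabet = @_;                 # remaining args, e.g. qw[a c g t]
    my $str      = '';
    # append one randomly chosen symbol per position;
    # rand @alphabet gives a fractional index that the slice truncates
    $str .= $alphabet[ rand @alphabet ] for 1 .. $len;
    return $str;                       # always exactly $len characters
}

my $s = rndStr_expanded( 80, qw[a c g t] );
print length $s;                       # prints 80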

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Re^5: Efficient way to handle huge number of records?
by flexvault (Monsignor) on Dec 16, 2011 at 14:46 UTC

    BrowserUk,

    I downloaded the code sample you provided, and it worked like a charm (after I converted it from DOS format to Unix format).
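
    (In case it helps anyone else hitting the same line-ending issue: a one-liner along these lines strips the DOS carriage returns; 'script.pl' is just a placeholder name here, and dos2unix works equally well.)

        perl -pi -e 's/\r$//' script.pl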

    Originally I did a cut and paste, and that must have caused the problem... Sorry.

    Thank you

    "Well done is better than well said." - Benjamin Franklin