perlquestion
JPaul
Greetings all;<BR>
I have a piece of code I've been fiddling around with that's designed to emulate natural speech, learning from user input (very simply, a learning chatterbox).<P>
I've been surprised by how much memory the data takes up, given how small it is when written to disk.<BR>
I use twin hashes, storing practically the same data, but in a different order. The script learns a sentence in two directions (front to back, back to front) so it can generate a sentence in either direction from a given keyword.<BR>
Right now each hash takes up 727k on disk (1.4M for the whole "brain") - but once loaded into memory it takes up a remarkable 16M! (I've run the script without the data loaded to verify that the data really accounts for it.)<BR>
My hash is put together like so:<BR>
<CODE>
$VAR1 = {
    'Word1_Word2' => {
        'Sym1' => 3,
        'Sym2' => 1
    },
    'Word3_Word4' => {
        'Sym4' => 3,
        'Sym3' => 1
    },
    'Word5_Word6' => {
        'Sym5' => 1
    }
};
</CODE>
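To show where the memory is going, here's a minimal sketch that measures the real footprint of a structure shaped like mine, assuming the CPAN module Devel::Size is installed (it's not core, so the sketch falls back gracefully without it):<BR>
<CODE>
use strict;
use warnings;

# A tiny hash-of-hashes shaped like the brain above.
my %brain = (
    'Word1_Word2' => { Sym1 => 3, Sym2 => 1 },
    'Word3_Word4' => { Sym4 => 3, Sym3 => 1 },
    'Word5_Word6' => { Sym5 => 1 },
);

# Devel::Size::total_size reports the full memory footprint of a
# structure, including all the per-hash and per-scalar bookkeeping
# Perl adds on top of the raw data.
if ( eval { require Devel::Size; 1 } ) {
    printf "brain: %d bytes in memory\n",
        Devel::Size::total_size( \%brain );
}
else {
    warn "Devel::Size not installed - can't measure\n";
}
</CODE>
Each inner hash carries its own overhead (buckets, key copies, per-scalar headers), which is why thousands of tiny two-or-three-key hashes balloon far beyond the size of the data itself.<BR>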
For comparison, I write every entry to disk in the format:
<CODE>
Word1 \a Word2 \00 Sym1 \00 3 \n
</CODE>
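One direction I've been considering (a sketch, not what the script currently does): since each inner hash only holds a few small counts, the second level could be collapsed into a single packed string per word pair - one string is far cheaper than an entire hash - and unpacked with split only when needed:<BR>
<CODE>
use strict;
use warnings;

# One NUL-joined string per word pair instead of an inner hash.
my %brain = (
    'Word1_Word2' => "Sym1\x003\x00Sym2\x001",
);

# Unpack on demand: split the string back into a temporary hash.
my %syms = split /\x00/, $brain{'Word1_Word2'};
print "Sym1 seen $syms{Sym1} times\n";    # Sym1 seen 3 times

# Update a count: unpack, bump, re-pack.
$syms{Sym2}++;
$brain{'Word1_Word2'} = join "\x00", %syms;
</CODE>
The trade-off is CPU for memory: every lookup pays for a split, but the per-pair storage drops to one scalar.<BR>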
Can you fine gentlemonks suggest a better way of storing this data in memory, one that's still easy to reference?<BR>
My thanks,<P>
JP,<BR>
-- Alexander Widdlemouse undid his bellybutton and his bum dropped off --