Hi Tilly,
Thanks for your response. Yes, we are looking at at least 20,000 to 50,000 responses, each covering around 50-80 questions.
The same respondents could also be asked more questions in the future, so the question database will keep growing over time.
I looked at the Data::Mining module, but it doesn't cater to what I need.
Right now I am looking at a Perl hash of hashes, storing it to a file with Storable.pm (or something similar) and loading it into memory when required, to reduce the processing time.
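Here is a minimal sketch of that approach, assuming Storable's store/retrieve and a nested structure keyed by year and city (the file name and key layout are just placeholders):

use strict;
use warnings;
use Storable qw(store retrieve);

# Hypothetical hash of hashes: year => city => fabric => units sold
my %sales = (
    1980 => {
        'New York' => { cotton => 5.6, nylon => 8.4 },
    },
);

store \%sales, 'sales.sto';          # serialize the whole structure to disk
my $loaded = retrieve 'sales.sto';   # load it back into memory when needed
print $loaded->{1980}{'New York'}{cotton}, "\n";   # prints 5.6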
There could be so many associations and rules that I am thinking of writing a rule engine too; I looked into yagg yesterday.
A query could look like:
In year 1980, in city 'New York', what was the maxSold fabricType for age group 20-25?
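That kind of query could be answered straight from the nested hash. This is only a sketch, and the key layout (year => city => age group => fabric => units sold) is an assumption about how the data might be arranged:

use strict;
use warnings;
use List::Util qw(reduce);

my %sales = (
    1980 => {
        'New York' => {
            '20-25' => { cotton => 5.6, nylon => 8.4, wool => 2.1 },
        },
    },
);

# Walk down to the fabric-type counts for that year/city/age group,
# then pick the key with the largest value.
my $fabrics  = $sales{1980}{'New York'}{'20-25'};
my $max_sold = reduce { $fabrics->{$a} > $fabrics->{$b} ? $a : $b } keys %$fabrics;
print "$max_sold\n";   # prints 'nylon'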
Can I store this historical data in the DB, or am I better off using hashes? I prefer XML, as my hash would map easily onto an XML DOM tree, for example:
<results>
  <result id='1'>
    <year>1980</year>
    <sales>
      <sale>
        <location>New York</location>
        <cotton>5.6</cotton>
        <nylon>8.4</nylon>
      </sale>
    </sales>
  </result>
</results>
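Producing XML like that from the hash should be straightforward; here is a minimal sketch, assuming XML::Simple from CPAN is acceptable (the exact output layout depends on the options passed to XMLout):

use strict;
use warnings;
use XML::Simple;

# Hypothetical structure mirroring the example above
my %results = (
    result => {
        id    => 1,
        year  => 1980,
        sales => {
            sale => {
                location => 'New York',
                cotton   => 5.6,
                nylon    => 8.4,
            },
        },
    },
);

# NoAttr => 1 emits nested elements instead of attributes
print XMLout(\%results, RootName => 'results', NoAttr => 1);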
I will look into PDL and the statistics packages as well.
Thanks All.
Sandeep