In reply to: reading dictionary file -> morphological analyser

Using a hash will speed things up, but unless you are running in some persistent environment (mod_perl etc.), reloading the hash on every request will still be quite slow.
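
For reference, "the hash" would be built something like this sketch (assuming the semicolon-separated record format used in your code below; dict.txt is a stand-in name for your dictionary file):

    # Build a lookup hash keyed on the target-language form.
    # Queries against %dict are fast, but rebuilding it from the
    # file on every CGI request is the slow part.
    my %dict;
    open my $fh, '<', 'dict.txt' or die $!;
    while (<$fh>) {
        chomp;
        my ( $english, $lang, $irreg, $clss ) = split /;/;
        $dict{$lang} = [ $english, $irreg, $clss ];
    }
    close $fh;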

If moving your dictionary into a database is not immediately possible for any reason, then you can speed things up a lot by simply splitting it into several files based on the first letter, so that you have 26 files named dict_A.txt, dict_B.txt, etc. That would (on average) make your program take less than 5% of the time it currently takes:

    if ( length $input ) {
        print "<p><b>$input</b></p>";

        ## Open only the one small file matching the first letter of the input
        my $firstLetter = uc substr( $input, 0, 1 );
        open DICTE, '<', "dict_$firstLetter.txt" or die $!;

        while (<DICTE>) {
            chomp;
            ( $english, $lang, $irreg, $clss ) = split /;/;

            if ( $input eq $lang ) {
                print "<p>$english - $lang, $clss</p>";
            }
            ## Conjugated forms are matched against the user's input
            if ( conj("$lang;present;1;singular") eq $input ) {
                print "<p>$english - $lang, $clss</p>";
            }
            if ( conj("$lang;present;2;singular") eq $input ) {
                print "<p>$english - $lang, $clss</p>";
            }
        }
        close DICTE;
    }
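
The split itself only needs doing once, offline. A minimal sketch (again assuming the full dictionary is one semicolon-separated entry per line in a file called dict.txt):

    ## One-time splitter: writes each entry to dict_<FIRST LETTER>.txt,
    ## bucketed on the target-language form (the field the lookup matches).
    my %out;
    open my $in, '<', 'dict.txt' or die $!;
    while ( my $line = <$in> ) {
        my ( undef, $lang ) = split /;/, $line;
        next unless defined $lang;
        my $letter = uc substr( $lang, 0, 1 );
        if ( !$out{$letter} ) {
            open $out{$letter}, '>', "dict_$letter.txt" or die $!;
        }
        print { $out{$letter} } $line;
    }
    close $_ for $in, values %out;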

And if that is still too slow, you can divide the dictionary into smaller pieces using the first two letters: dict_AA.txt, dict_AB.txt, etc., which (on average) should reduce the search time to roughly 0.2% of the current, since each lookup now scans about 1/676th (26 x 26) of the data.
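
The only change to the lookup is taking two characters instead of one (and, per the next point, you would want to handle a missing file for rare prefixes rather than just die):

    ## Same lookup as above, keyed on the first two letters
    my $prefix = uc substr( $input, 0, 2 );
    open DICTE, '<', "dict_$prefix.txt" or die $!;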

Though there are probably few words beginning with ZZ... or JJ... etc., so the speed-up won't be evenly distributed.

Moving your data into a simple tied DB like BerkeleyDB or similar is a better long-term solution, but this might help you out while you get to grips with Perl, hashes, modules and the rest.
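
With DB_File (the Berkeley DB tie that ships with Perl), the lookup side might look like this sketch. The file name dict.db and the packed value format are assumptions; you would populate the DB once from your text file:

    use DB_File;
    use Fcntl;

    ## Tie a hash to an on-disk Berkeley DB file: lookups go straight
    ## to disk, so nothing needs reloading between CGI requests.
    tie my %dict, 'DB_File', 'dict.db', O_RDONLY, 0644, $DB_HASH
        or die "Cannot tie dict.db: $!";

    if ( exists $dict{$input} ) {
        ## Assumes each value was stored as "english;irreg;clss"
        my ( $english, $irreg, $clss ) = split /;/, $dict{$input};
        print "<p>$english - $input, $clss</p>";
    }

    untie %dict;

The nice property of the tied approach is that your code still reads like ordinary hash access, so switching over later costs very little.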

