in reply to parsing textfile is too slow

I'm not entirely sure what you're trying to do, but one thing that should give you a slight speed boost is avoiding the assignment:
$unihan{$_}[82] = $unihan{$_}[82];
That's a totally wasted op, unless I'm missing something.
Depending on the size of the %unihan hash, you may also be able to save some time by changing:
foreach (keys %unihan) { ... }
to:
while ( my ($key, $val) = each %unihan ) { ... }
keys will create a new list containing all the keys. If your hash is large, this can be an expensive procedure. You might also try something similar to replace the join, depending on the sizes of the things you're dereferencing.
Anyway, in a nutshell, the more records you have in a hash/arrayref, the more it hurts to copy them (dereference, keys/values, etc.).
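To make the comparison concrete, here's a minimal sketch of the two iteration styles. The %unihan layout and the field index 82 are taken from the code above; the sample data itself is made up for illustration:

```perl
use strict;
use warnings;

# Hypothetical stand-in for the poster's %unihan hash-of-arrayrefs
# (field index 82 comes from the original code; the data is invented).
my %unihan = (
    'U+4E00' => [ ('x') x 83 ],
    'U+4E01' => [ ('y') x 83 ],
);

# keys builds a complete list of every key before the loop starts,
# costing memory and time proportional to the size of the hash.
my @seen_keys;
for my $char (keys %unihan) {
    push @seen_keys, $unihan{$char}[82];
}

# each hands back one key/value pair per call, so no large temporary
# list is built, and $val already holds the array ref -- no second
# hash lookup is needed inside the loop.
my @seen_each;
while ( my ($key, $val) = each %unihan ) {
    push @seen_each, $val->[82];
}
```

Both loops visit the same pairs; the difference is only in how much temporary data Perl builds up front, which is what matters once the hash gets large.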

Update: fixed one code example (oops), thanks QM

Re^2: parsing textfile is too slow
by QM (Parson) on Aug 17, 2005 at 18:27 UTC
    Shouldn't
    while ($key,$val) %unihan { }
    be
    while ( ($key, $val) = each %unihan ) { }
    ??

    -QM
    --
    Quantum Mechanics: The dreams stuff is made of

      Oops, yes, corrected it. I knew something didn't look right, but I had written it off to the fact that I don't use each() very often. Thanks.