in reply to Problems with defining hashes

This might be an easier way to avoid the floating-point problem for your hash keys:
    # first, generate fixed-precision strings for magnitudes
    # and bin values:
    my @mag_keys = map { sprintf("%.1f", $_/10) } ( 50 .. 89 );
    my @bins = qw/0 0.1 0.3 1 3 10 30 1000/;

    # now initialize hash bins for counting:
    # (this is probably unnecessary, unless you're re-using
    # the hash on multiple separate data sets)
    my %n;
    for my $mag ( @mag_keys ) {
        for my $bin ( @bins ) {
            $n{$mag}{$bin} = 0;
        }
    }
The next thing to watch out for, when actually counting things up, is to avoid the "==" operator when testing whether a floating-point value from your input data matches a given hash key. Use only "<", ">", "<=", ">=" as needed, or else sprintf the data value to the same precision as the hash key and compare with "eq" (or "gt", "ge", "le", "lt").
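For example, a counting loop along these lines stays safe (a minimal sketch continuing from the initialization above; the input format -- one magnitude and one time interval per line -- and the "upper bound" bin semantics are assumptions on my part, not something from your post):

    # hypothetical counting loop: assumes each input line holds a
    # magnitude and an elapsed time, whitespace-separated
    while ( my $line = <> ) {
        my ( $mag, $interval ) = split ' ', $line;

        # sprintf the magnitude to the same precision as the hash
        # keys, so the hash lookup is effectively a string "eq"
        my $mag_key = sprintf( "%.1f", $mag );
        next unless exists $n{$mag_key};

        # pick the first bin boundary the interval doesn't exceed,
        # using "<=" rather than "==" (assumes bins are upper bounds)
        my ( $bin ) = grep { $interval <= $_ } @bins;
        $bin = 1000 unless defined $bin;   # catch-all bin

        $n{$mag_key}{$bin}++;
    }

The point is just that the magnitude only ever meets the hash keys as a formatted string, and the interval only ever meets the bin boundaries through "<=".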

I'm still scratching my head about the "0, 0.1, 0.3, ..." series -- that jump from 30 to 1000 seems odd.

Re^2: Problems with defining hashes
by Annemarie (Acolyte) on Mar 30, 2005 at 23:32 UTC
    As I just said to Flavio, I had completely forgotten about issues with floating point numbers. Your suggestions work beautifully and are brief. Thank you! The values 0.1 to 30 cover logarithmic time intervals during which I assume my data to be complete. 1000 is an arbitrary default value for aftershocks outside my time intervals. I could have used any other name, maybe 'outside'. The '0' collects foreshock data. I hope the explanation saves your head from being scratched. ;-)