in reply to Re^2: How to improve this data structure?
in thread How to improve this data structure?

If they're sequential, it should be @StatsArray, in which case @{$StatsArray[$RegionNumber]} would be appropriate, and probably even a little faster, since an array index lookup has a smaller constant cost than a hash lookup.

Here's a sort of loose and dirty explanation of why you get such a good speedup here. Let's assume that your original @StatsArray had 1_000_000 entries, and that there are ten regions, each of which has 100_000 entries.

Your original approach was sorting 1_000_000 entries. Sort is an O(n log n) operation, so we can say that there were approximately 1M * log(1M) units of work going on.

The grep approach helps because grep is an O(n) operation. So you walk through the million item list one time, and pull out 100_000 entries. Then you sort the 100_000 entries. So you have 1M + ( 100K * log(100K) ) units of work, approximately.
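As a minimal sketch of that grep-then-sort pattern (the record layout and field names like RegionNumber and Value are made up for illustration, not taken from the original code):

```perl
use strict;
use warnings;

# A flat list of records, each a hashref with a region number and a value.
my @StatsArray = (
    { RegionNumber => 2, Value => 30 },
    { RegionNumber => 1, Value => 10 },
    { RegionNumber => 2, Value => 20 },
    { RegionNumber => 1, Value => 40 },
);

my $RegionNumber = 2;

# O(n) pass over the whole list to pull out one region's records...
my @region = grep { $_->{RegionNumber} == $RegionNumber } @StatsArray;

# ...then an O(k log k) sort over just that region's k records.
my @sorted = sort { $a->{Value} <=> $b->{Value} } @region;
```

The grep still touches every one of the n records, which is the "1M" term above.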

My approach eliminates the need for the grep. So you do away with the "1M" units of work, and are left with 100K * log(100K) units of work.
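The same idea with the grep eliminated: partition the records into a hash of arrays as the file is parsed, then sort each region's (much smaller) list on its own. Again, the field names are hypothetical stand-ins for the real data:

```perl
use strict;
use warnings;

my %Stats;    # region number => arrayref of that region's records

# While parsing, push each record onto its own region's list,
# so no separate filtering pass is ever needed.
for my $rec (
    { RegionNumber => 2, Value => 30 },
    { RegionNumber => 1, Value => 10 },
    { RegionNumber => 2, Value => 20 },
) {
    push @{ $Stats{ $rec->{RegionNumber} } }, $rec;
}

# Later, sort each region independently: k log k work per region,
# with no O(n) grep over the full list.
my %SortedStats;
for my $region ( keys %Stats ) {
    $SortedStats{$region} =
        [ sort { $a->{Value} <=> $b->{Value} } @{ $Stats{$region} } ];
    # ... process $SortedStats{$region} for this region ...
}
```

Note that `push @{ $Stats{$region} }, $rec` autovivifies the arrayref the first time a region is seen, so no initialization step is required.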

This is really a rough approximation of what's going on, but fits fairly well, and I think should help to explain why you see such an improvement.

The database approach would still scale better, so that you don't have to rewrite the code when 1_000_000 entries becomes 100_000_000. ;)


Dave

Re^4: How to improve this data structure?
by fiddler42 (Beadle) on May 21, 2013 at 22:10 UTC
    My apologies: when the data file is parsed and %StatsArray is populated, the region numbers turn out to be totally random. So the region numbers are sparse, hence @{$StatsArray{$RegionNumber}} works. (Each region is ultimately *processed* sequentially after I am done with the data file.)

    Thanks for the explanation, too. I have found hashes of arrays of hashes a little confusing in the past, but I finally have a good, functional example to leverage for future efforts.