    push (@StatsArray, {RegionNum => $RegionNum, AR => $AR[$RegionNum], BCR => $BCR[$RegionNum]});

I like the data structure above because it is easy to understand, and I can add new elements to the array as the script becomes more sophisticated. I have included only 3 of 15 variables here, and I expect the number of variables to increase a little more.

2) Once the array is fully populated (millions of entries) and I need to pluck data from it, I need to do two things: a) *always* numerically sort by RegionNum first, and then b) numerically sort on some other key. Here is an example:

    foreach $RegionCoords (@AllRegionCoords) {
        $RegionNum++;
        foreach (sort { $$a{RegionNum} <=> $$b{RegionNum} or $$a{AR} <=> $$b{AR} } @StatsArray) {
            $RegionKey = $$_{RegionNum};
            $ARKey     = $$_{AR};
            if ($RegionKey == $RegionNum) {
                # do stuff with sorted AR data in region n
            }
        }
    }

Again, easy to follow. The problem is that there are many different region numbers and I am not breaking the data down into smaller chunks. For example, although there may be 1 million entries in @StatsArray, all of those entries might be made up of, say, 10 region numbers with 100,000 entries in each one. So instead of sorting only 100,000 entries at a time, I am always sorting 1 million entries at a time, and a lot of separate sorts are issued, hence the runtime problem.

I have two questions:

1) How can I reformat @StatsArray to be keyed by region number and make sorting faster? 10 StatsArrays instead of 1 would be fine.

2) If a new data structure is proposed, how would I numerically sort something like the AR key? This is really what I am struggling with the most: ways to sort on only 1 of n keys in an array when its data structure gets more complicated.

I do not use arrays like this very often, so any suggestions would be much appreciated. Thanks!

-fiddler42
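One possible restructuring, sketched here as a suggestion rather than anything from the original post: a hash of arrays keyed by RegionNum. Each entry is pushed into its region's bucket, so each region's records can be sorted on AR (or any other key) independently, without ever touching the other regions' entries. The sample records below are made up for illustration.

    use strict;
    use warnings;

    # region number => array ref of stat hashes (autovivified on first push)
    my %StatsByRegion;

    # Populate: push each record into its region's bucket instead of one big array.
    # These three records are hypothetical stand-ins for the real 15-variable data.
    for my $rec (
        { RegionNum => 2, AR => 5.1, BCR => 0.9 },
        { RegionNum => 1, AR => 3.3, BCR => 1.2 },
        { RegionNum => 1, AR => 0.7, BCR => 2.4 },
    ) {
        push @{ $StatsByRegion{ $rec->{RegionNum} } }, $rec;
    }

    # Walk the regions in numeric order; sort only that region's records by AR.
    for my $RegionNum (sort { $a <=> $b } keys %StatsByRegion) {
        my @sorted = sort { $a->{AR} <=> $b->{AR} } @{ $StatsByRegion{$RegionNum} };
        for my $rec (@sorted) {
            # do stuff with sorted AR data in region $RegionNum
        }
    }

With 10 regions of 100,000 entries each, this sorts 10 lists of 100,000 instead of one list of 1,000,000 on every pass, and the RegionNum comparison drops out of the sort block entirely because the hash key already does that grouping.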
In reply to How to improve this data structure? by fiddler42