What are the performance impacts of this?
It would have to be measured, i.e., benchmarked, with real data.
However, my gut feeling is that removing one level of nesting is likely to speed things up a bit, but probably not by a large margin. I doubt that you really care about the difference for what you're doing. So, don't worry too much about performance unless you really have to.
The hash solution (especially with concatenated keys) is very likely to use far less memory, at least with sparse data. Suppose you've got only one data point with coordinates (800, 1200). With an array of arrays, you have to allocate essentially 800 * 1200 array slots; that's quite a lot of memory for just one piece of data. But with a hash you need to allocate only one or two hash entries; even considering that a hash entry uses more memory than an array entry, there is a significant win here.
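A quick Perl sketch of the difference (the coordinates and values here are made up; note that Perl autovivifies only the rows you actually touch, so in practice the array wastes fewer slots than a fully populated 800 * 1200 grid would, but still far more than the hash):

```perl
use strict;
use warnings;

# One sparse point, stored two ways.
my %grid;
$grid{"800,1200"} = 'some object';   # a single hash entry

my @aoa;
$aoa[800][1200] = 'some object';     # autovivifies the top-level array
                                     # (801 slots) plus row 800 (1201 slots)

printf "hash entries:          %d\n", scalar keys %grid;   # 1
printf "top-level array slots: %d\n", scalar @aoa;         # 801
```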
I may actually go with this method, since I see the memory benefits should be quite large. What's more, to get an ordered list of objects all I need to do is take the keys and sort them by their $x value, which is far cheaper than scanning the full (mostly empty) array. Thanks.
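Something along these lines (a hypothetical sketch, assuming the concatenated "x,y" key format discussed above, with made-up points) — the sort runs over only the points that exist, in O(n log n):

```perl
use strict;
use warnings;

# A few sparse points keyed "x,y".
my %grid = ( "5,9" => 'a', "800,1200" => 'b', "5,2" => 'c' );

# Recover the points ordered by x (then y) by parsing the keys.
my @ordered =
    sort { $a->[0] <=> $b->[0] or $a->[1] <=> $b->[1] }
    map  { [ split /,/ ] } keys %grid;

for my $pt (@ordered) {
    my ( $x, $y ) = @$pt;
    my $obj = $grid{"$x,$y"};
    print "($x,$y) => $obj\n";
}
```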
but it doesn't seem clearer
Granted, but it makes things simpler (and easier) if you need to traverse your entire data structure. You essentially get a better data abstraction if you think in terms of "location", rather than "x-y coordinates".
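For illustration (again assuming the concatenated "x,y" keys, with hypothetical data): traversal becomes a single flat loop over locations instead of two nested index loops.

```perl
use strict;
use warnings;

my %grid = ( "5,9" => 'a', "800,1200" => 'b' );

# One flat loop over every stored location.
while ( my ( $loc, $obj ) = each %grid ) {
    my ( $x, $y ) = split /,/, $loc;
    print "object '$obj' at ($x, $y)\n";
}
```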
I should probably note here that, since I do have a sparse data set, the array members are undefined until needed.
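For example (a hypothetical sparse array-of-arrays), a full scan then has to check definedness and skip the holes:

```perl
use strict;
use warnings;

my @aoa;
$aoa[5][9]      = 'a';
$aoa[800][1200] = 'b';

for my $x ( 0 .. $#aoa ) {
    next unless defined $aoa[$x];             # row never touched
    for my $y ( 0 .. $#{ $aoa[$x] } ) {
        next unless defined $aoa[$x][$y];     # empty cell in the row
        print "object '$aoa[$x][$y]' at ($x, $y)\n";
    }
}
```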