in reply to Speeding up point-in-polygon
Caching the results of your point-in-poly function will not yield any performance gain unless you have duplicate points. And if you do have duplicate points, you'd almost certainly be better off sorting and grouping them so that you only look each point up once, since the space requirements for the caching are non-trivial. A sketch of that grouping follows.
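A minimal sketch of the de-duplication idea, assuming @points holds [x, y] array refs and that pointInPoly() and $somePoly exist as elsewhere in the thread (both are placeholders here, not from the original post): each unique coordinate is tested once, and the result is fanned back out to its duplicates.

    my %unique;
    push @{ $unique{ "$_->[0],$_->[1]" } }, $_ for @points;

    for my $key ( keys %unique ) {
        my $rep = $unique{ $key }[0];             # representative point
        if ( pointInPoly( $rep, $somePoly ) ) {   # one test per unique coordinate
            for my $point ( @{ $unique{ $key } } ) {
                ## act on every duplicate of this point
            }
        }
    }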
Personally, I'd pre-process the polys to group them by their xmin bound, and subgroup them by their ymin bound. That way, you would only do a full point-in-poly comparison of each point against a small subset of the total polys. E.g.
my %hashedPolys;

# Bucket each poly under a two-level hash keyed by its hashed
# xmin and ymin bounds.
while ( @polys ) {
    my $poly = pop @polys;
    push @{ $hashedPolys{ xmin( $poly ) % 100 }{ ymin( $poly ) % 100 } },
        $poly;
}

...

# Each point now only gets the full test against the polys that
# share its bucket.
for my $point ( @points ) {
    for my $poly ( @{ $hashedPolys{ $point->[0] % 100 }{ $point->[1] % 100 } } ) {
        if ( pointInPoly( $point, $poly ) ) {
            ## take whatever action is appropriate
        }
    }
}
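The sketch above assumes xmin(), ymin() and pointInPoly() already exist; they aren't defined in the original post. A minimal, self-contained version might look like this, taking each poly as an array ref of [x, y] vertex pairs and using the standard even-odd (ray casting) test:

    use strict;
    use warnings;

    # Bounding-box minima of a poly's vertices.
    sub xmin {
        my ($poly) = @_;
        my $min = $poly->[0][0];
        for my $v ( @$poly ) { $min = $v->[0] if $v->[0] < $min }
        return $min;
    }

    sub ymin {
        my ($poly) = @_;
        my $min = $poly->[0][1];
        for my $v ( @$poly ) { $min = $v->[1] if $v->[1] < $min }
        return $min;
    }

    # Even-odd point-in-polygon: count edge crossings of a ray cast
    # rightward from the point; an odd count means the point is inside.
    sub pointInPoly {
        my ($point, $poly) = @_;
        my ($x, $y) = @$point;
        my $inside  = 0;
        my $n       = @$poly;
        for my $i ( 0 .. $n - 1 ) {
            my ($x1, $y1) = @{ $poly->[ $i ] };
            my ($x2, $y2) = @{ $poly->[ ($i + 1) % $n ] };
            if ( ( ($y1 > $y) != ($y2 > $y) )
                && ( $x < ($x2 - $x1) * ($y - $y1) / ($y2 - $y1) + $x1 ) ) {
                $inside = !$inside;
            }
        }
        return $inside;
    }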
The choice of granularity (x & y) for the hashing (where I've used % 100) will depend very much on the size and shape of your total coordinate space and the maximum size (enclosing rectangle) of your polys. Note that a point can hash to a different cell than the xmin/ymin of the poly containing it whenever the poly straddles a cell boundary, so the cells need to be coarse relative to your largest poly, or you need to check the adjacent cells as well.
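As a rough illustration of sizing the cells (the names here are mine, not from the post): scan the polys once for the largest bounding-box extent, so that no poly straddles more than a couple of cells.

    use List::Util qw( min max );

    my $cell = 1;
    for my $poly ( @polys ) {
        my @xs = map { $_->[0] } @$poly;
        my @ys = map { $_->[1] } @$poly;
        my $w  = max( @xs ) - min( @xs );
        my $h  = max( @ys ) - min( @ys );
        $cell  = $w if $w > $cell;
        $cell  = $h if $h > $cell;
    }
    # $cell can then stand in for the fixed 100 used above.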