You'll probably be better off with some of the data structures suggested here, but this is the solution I've thought of:
Assume that your elements are in the array @segments. Sort the data by length (@segments = sort { $a <=> $b } @segments; — a numeric sort, since the elements here are the segment lengths themselves, and note that sort returns the sorted list rather than sorting in place). (I'm not sure exactly what algorithm Perl uses for sort, but I'm sure it's fairly efficient.) You can now use a regular slice to grab a first guess at your data (my @range = @segments[50000 .. 100000]; — those are array indices, so this is only a rough window) and then scan through @range to lop off any elements that are too big or small. A simple grep will work (there's a sketch of that after the loop version below), but if you want to be really efficient, you can process the data in two foreach loops and stop when you've hit your desired range, like this:
    my ($start, $end);

    # Walk up from the front to find the first element >= 50000.
    foreach my $i (0 .. $#range) {
        if ($range[$i] >= 50000) { $start = $i; last; }
    }

    # Walk down from the back to find the last element <= 100000.
    # ("$#range .. 0" would produce an empty list, so reverse an
    # ascending range instead.)
    foreach my $i (reverse 0 .. $#range) {
        if ($range[$i] <= 100000) { $end = $i; last; }
    }

    my @wanted_data = @range[$start .. $end];
You could use these two loops without the initial sort and slice, but doing the sort and slice first can significantly reduce the amount of linear scanning done.
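For completeness, here is what the simple grep version mentioned above might look like as a full script. This is a minimal, untested sketch in the spirit of the disclaimer below; the sample values and the 50000/100000 bounds are just placeholders:

    use strict;
    use warnings;

    # Hypothetical sample data: each element is a numeric segment length.
    my @segments = (120, 75_000, 99_999, 42, 100_001, 50_000, 88_123);

    # Sort numerically so the in-range elements form one contiguous run.
    @segments = sort { $a <=> $b } @segments;

    # One pass over the array, keeping only values inside the range.
    my @wanted_data = grep { $_ >= 50_000 && $_ <= 100_000 } @segments;

    print "@wanted_data\n";    # prints: 50000 75000 88123 99999

The trade-off is that grep always touches every element, while the two loops above can stop as soon as each boundary is found, so the loops win on large arrays where the wanted range is small.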
----
I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
-- Schemer
Note: All code is untested, unless otherwise stated
In reply to Re: indexing segments by hardburn
in thread indexing segments by glwtta