BrowserUk,
Since this was a "one and done" script, I didn't care too much about making it as efficient as possible. I ended up just performing two binary searches (one to find each endpoint) and then reverted to a linear search to handle the duplicates.
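For what it's worth, here is a rough, untested sketch of that fallback for one endpoint. It assumes numeric data, and last_index_linear is just an illustrative name, not anything from the actual script:

```perl
use strict;
use warnings;

# Binary search for $val, then walk right through any duplicates.
# O(log N) to find a match, O(k) for k duplicates - fine for a one-off.
sub last_index_linear {
    my ( $list, $val ) = @_;
    my ( $lo, $hi ) = ( 0, $#$list );
    while ( $lo <= $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        if    ( $list->[$mid] < $val ) { $lo = $mid + 1 }
        elsif ( $list->[$mid] > $val ) { $hi = $mid - 1 }
        else {
            $mid++ while $mid < $#$list and $list->[ $mid + 1 ] == $val;
            return $mid;
        }
    }
    return -1;    # $val not present
}
```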
Regarding the statement: "I don't believe it is possible to code a search over sorted data with duplicates that comes even close to being O(log N). Even in theory. And in practical implementations, it'd be far slower."
Why would the following logic (sketched in code after the list) be so much slower in implementation?
- Given: A sorted list containing duplicates
- Given: A target value
- Given: A desired anchor (i.e., which endpoint the result should be closest to)
- Find: The closest element to the desired anchor that is equal to or $desired_operator than the target value
1. Perform a normal binary search to find the target item
   - If not found, check whether $list[$mid] < $val to determine if $mid has to be adjusted by one to satisfy $anchor/$desired_operator - done
   - If found, proceed to step 2
2. Determine whether you are at the "right" endpoint of the run of duplicates by checking $list[$mid - 1] eq $val or $list[$mid + 1] eq $val (whichever neighbor lies in the direction of $anchor)
   - If yes, done
   - If no, proceed to step 3
3. Check whether this item is even a duplicate by checking $list[$mid - 1] eq $val or $list[$mid + 1] eq $val - whichever check was not done in step 2
   - If not a duplicate, done
   - If a duplicate, proceed to step 4
4. Use a second, bounded binary search to find where the duplicates end in the desired direction. For the description, let's say I am trying to find the last occurrence: $min = $mid from the previous search and $max = $#list.
   - If $list[$mid] eq $val, $min = $mid
   - If $list[$mid] ne $val, $max = $mid - 1
   - Stop when $max - $min < 2 (a final check of $list[$max] then decides between $min and $max)
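To make the question concrete, here is a rough, untested sketch of those four steps for the last-occurrence case. It assumes numeric data (== and < rather than eq), leaves out the not-found/$anchor adjustment from step 1, and find_last is just an illustrative name:

```perl
use strict;
use warnings;

# Steps 1-4 above for the "find the last occurrence" case, using
# numeric comparisons. Returns the index of the last element == $val,
# or -1 if $val is not present (the $anchor/$desired_operator
# adjustment from step 1 is left out to keep the sketch short).
sub find_last {
    my ( $list, $val ) = @_;

    # Step 1: plain binary search for any element equal to $val
    my ( $lo, $hi ) = ( 0, $#$list );
    my ( $mid, $found ) = ( 0, 0 );
    while ( $lo <= $hi ) {
        $mid = int( ( $lo + $hi ) / 2 );
        if    ( $list->[$mid] < $val ) { $lo = $mid + 1 }
        elsif ( $list->[$mid] > $val ) { $hi = $mid - 1 }
        else                           { $found = 1; last }
    }
    return -1 unless $found;

    # Steps 2/3: already at the right-hand end of the run of duplicates?
    return $mid if $mid == $#$list or $list->[ $mid + 1 ] != $val;

    # Step 4: bounded binary search over [$mid, $#list] for the end of the run
    my ( $min, $max ) = ( $mid, $#$list );
    while ( $max - $min >= 2 ) {
        my $m = int( ( $min + $max ) / 2 );
        if   ( $list->[$m] == $val ) { $min = $m }
        else                         { $max = $m - 1 }
    }
    return $list->[$max] == $val ? $max : $min;
}

my @list = ( 1, 3, 3, 3, 3, 7, 9 );
print find_last( \@list, 3 ), "\n";    # prints 4
```

Every pass of either loop halves the remaining range, which is why I don't see where the claimed slowdown would come from.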