in reply to Re^14: [OT] The interesting problem of comparing (long) bit-strings.
in thread [OT] The interesting problem of comparing bit-strings.

That's fine, I'm in exactly the same position.

Mine seems to work most of the time, but I've occasionally seen long patterns found at offsets earlier than expected.

Given I was using random data, a duplicate match is a possibility; but with needles of hundreds or thousands of bits (extracted from the randomly generated haystack), you wouldn't expect it to happen with any frequency in a human lifetime -- even in a billion bits of haystack -- and I've seen it half a dozen times already.

Of course, it only ever happens when both haystack and needle are huge; and even if I did dump the bits for manual inspection, comparing thousands of 0s & 1s by eye is just too painful. (I did try it once!)

Hence, I went looking for a better test strategy -- De Bruijn sequences -- which took rather longer to get right than I'd like to admit. (It would have been easier on a big-endian processor!)

That -- last night -- allowed me to confirm that there are some circumstances in which I get false hits -- it seems to be related to __shiftleft128() treating a shift count of 64 as 0!

So now I'm recoding the entire thing in an OO style so that I don't have to juggle so many different offsets, shifts and counts in the mainline code. But I only just started.

Bottom line: instead of continuing to post "Boyer Moore would be faster!" - "No it won't!" - "Yes it will!"; how about we wait until we're both ready and compare our actual code.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

Re^16: [OT] The interesting problem of comparing (long) bit-strings.
by salva (Canon) on Apr 01, 2015 at 14:18 UTC
    I don't think I'll be able to work on my B-M implementation over the next few days, so here is what I currently have: bitstrstr.c.

    I think the B-M side of the algorithm is pretty much done; what remains is converting the slow_check function into a fast_check one.

    Then there are edge cases, such as reading bytes out of bounds, which should be checked.

      Okay salva. Understood. This stuff is hard, isn't it?

      If I ever get all the edge-cases covered in mine, I'll try and understand yours enough to finish it off. (I hope you'll field a few questions; cos after a first glance, there'll be a few :)



      I have mine working and pretty well tested, if you're interested in seeing it?

      I've started looking at your B-M version -- which doesn't look like any B-M implementation I've ever seen? -- and ... well, expect questions :)


        Answers to questions:

        oiskuu says: re: bitstrstr. That looks like Horspool. (Most B-M simplifications have dropped the "good-suffix" shift and kept the "bad-character(s)" shift).

        Yes, it is actually Boyer-Moore-Horspool.

        I still have to come up with a way to implement the good-suffix tables without incurring a 32*m (or 64*m) memory usage, which I consider unacceptable. Using the delta compression would reduce it to 8*m. Maybe it can be done at the byte level, and then it would come down to 1*m... the thing is, I like the O(1) memory consumption of B-M-H.

        Where does the delta compression idea come from?

        It is my own idea: trying to keep all the delta information in the cache.

        Currently, on the GitHub repo there are three variants of the algorithm: the "master" branch, which tries to work at byte boundaries when looking for the bad-character shift; the "simplify" branch, which works at the bit level; and the "caching" one (implementing the delta compression), which tries to be cache friendly.

        I still don't know if there will be any effect on performance. I think there would be edge cases where having a precise delta would help, but I don't know if those are likely to appear in real-life data.

        The same applies when deciding whether to run the bad-character test at byte boundaries first or just work at the bit level. The former removes a memory load (very likely from the processor cache) and a shift operation.

        I was considering another variation: working at the byte level, and then performing 8 parallel bitstring comparisons when delta < 8 bits (or even working with uint16_t units and performing 16 comparisons in parallel).

        BrowserUk says: did you start with a (simple byte-wise) Boyer-Moore implementation cribbed from somewhere?

        No, I started from scratch.

        BrowserUk says: I'm confused as to the difference between needle_offset & needle_prefix?

        needle_prefix is just a hack for testing byte-unaligned needles.

        BrowserUk says: If you have a brief explanation of the values in the delta table it might help. I tried looking at them in hex & binary to understand their purpose; but nothing leaps off the page at me.

        It is (mostly) an exponential sequence used to reduce the size of the B-M-H delta table to something that fits into the L1 cache. The script used to generate it is also in the repository.

        The jump table contains indexes (uint8_t) into the delta table (uint32_t). That way, for a 14-bit window size, the function uses 1*(1<<14) + 256*4 bytes = 17KB of working memory, which fits in the 32KB L1 cache of current x86/x86_64 CPUs.