in reply to Re^9: Our perl/xs/c app is 30% slower with 64bit 5.24.0, than with 32bit 5.8.9. Why?
in thread Our perl/xs/c app is 30% slower with 64bit 5.24.0, than with 32bit 5.8.9. Why?

they would then need to cause the server to generate a set of headers that provoked the pathological behaviour.
Perhaps I didn't make it clear. It didn't require a specific set of headers. The point of the random key generator script was to demonstrate that it works with any set of headers.
And, how many web servers would still be running that same perl process, with that same random seed 15 minutes later
That 15 minute time wasn't optimised code - it was just a proof of concept. I'm sure it could be made much, much faster, and it's also parallelisable. What you do is open a TCP connection to a server process, send one request and keep the connection open, calculate the seed from the response, then send a second request which DoSes the server. Also, depending on how the perl processes are spawned/forked, they may all share the same hash seed.
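To sketch that flow (with a hypothetical target, and placeholder routines standing in for the seed recovery and key generation, which are the hard parts):

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Placeholders for the two hard steps described above.
    sub derive_seed    { ... }   # recover the seed from leaked key order
    sub colliding_keys { ... }   # generate keys that collide under that seed

    # Hypothetical target; one keep-alive connection, two requests.
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'www.example.com',
        PeerPort => 80,
        Proto    => 'tcp',
    ) or die "connect: $!";

    # Request 1: provoke a response that leaks data in hash order.
    print $sock "GET /probe HTTP/1.1\r\nHost: www.example.com\r\n"
              . "Connection: keep-alive\r\n\r\n";
    my $leaked = do { local $/ = "\r\n\r\n"; scalar <$sock> };

    # Request 2, to the same process (so the same seed): the generated
    # colliding keys go in as header names.
    my @keys = colliding_keys( derive_seed($leaked) );
    print $sock "GET / HTTP/1.1\r\nHost: www.example.com\r\n",
          ( map { "X-$_: 1\r\n" } @keys ),
          "Connection: close\r\n\r\n";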
In any case, my comment about "unnecessary" was little more than a footnote.
But you've spent an awful lot of time since then trying to convince anyone who will listen that it isn't a security issue, and you've been shown repeatedly that the assumptions you based this conclusion on were erroneous.

Dave.


Re^11: Our perl/xs/c app is 30% slower with 64bit 5.24.0, than with 32bit 5.8.9. Why?
by BrowserUk (Patriarch) on Dec 22, 2016 at 18:00 UTC
    But you've spent an awful lot of time since then trying to convince anyone who will listen that it isn't a security issue,

    Hm. Seems to me you've expended a lot of time attacking my opinion.

    All I've done is spend a little downtime politely and respectfully responding to your inflammatory comments.

    you've been shown repeatedly that the assumptions you based this conclusion on were erroneous

    No. Far from it. You've described a toy process that can brute-force its way to the seed when fed a relatively large number of unrealistically short keys.

    What you patently failed to demonstrate is how that knowledge can be used to do anything bad.

    Yes, you can use knowledge of the seed to construct a set of keys that could induce pathological behaviour if used to construct a hash, but you simply omitted to address the problem of how you are going to persuade the server to construct a hash from the set of keys you've generated.

    As I said: a purely theoretical problem that has never been, and will never be, demonstrated in the wild; and one addressed in a clumsy and over-engineered fashion.

    But that's just my opinion; it won't change a thing and seems hardly worthy of your esteemed time to argue with; but here we are, 11 levels deep.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Hm. Seems to me you've expended a lot of time attacking my opinion.

      All I've done is spend a little downtime politely and respectfully responding to your inflammatory comments.

      Oh sigh. Your very first post on this topic made an unevidenced assertion that the hash-related security fixes done in 5.18.0 were unnecessary and may have significantly slowed the perl interpreter. All I have done is patiently and politely try to demonstrate that your assertions were incorrect. And it seemed to me important to correct this assertion as, assuming I was right, people could be misled into thinking something insecure was secure.

      Somehow that is inflammatory.

      Yes, you can use knowledge of the seed to construct a set of keys that could induce pathological behaviour if used to construct a hash, but you simply omitted to address the problem of how you are going to persuade the server to construct a hash from the set of keys you've generated.
      That's utterly trivial. You send an HTTP request to the server with a bunch of headers or parameters containing the generated keys. If the server creates a hash from the input, you've done it. Or you could supply some JSON to a server that processes JSON input. Etc etc.
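      For instance, here's a hypothetical handler (not any particular framework's API) showing both patterns; anything along these lines turns attacker-chosen strings straight into Perl hash keys:

          use strict;
          use warnings;
          use JSON::PP;   # core module; exports decode_json

          # Hypothetical request handler: both paths below build a hash
          # whose keys are chosen entirely by the client.
          sub handle_request {
              my ($raw_headers, $body) = @_;

              # Fold "Name: value" request headers into a hash.
              my %headers = map { /^([^:]+):\s*(.*)/ ? (lc $1, $2) : () }
                            split /\r\n/, $raw_headers;

              # Or decode a JSON object: its keys become hash keys too.
              my $params = decode_json($body);

              return exists $headers{'content-length'} ? 200 : 411;
          }

          # e.g. handle_request("Host: x\r\nX-Key1: 1\r\nX-Key2: 1", '{"a":1}');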

      And remember that most of the recent discussion above has been about servers leaking the hash seed. That isn't the only (or main) thing fixed in 5.18.0. The biggie was that if you supplied a suitable small set of keys in a request (no need to know the server's seed), you could force the perl process to allocate gigabytes' worth of memory. Also, I think there were issues with how the existing code protected against algorithmic complexity attacks, but I don't remember the details now.
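      I can't reproduce the exact pre-5.18 rehash behaviour here, but the complexity problem underneath it is easy to simulate with a toy bucket table (an illustration only, not perl's real hash): once every key lands in one bucket, each insert rescans the chain, so n inserts cost O(n^2). Doubling n should roughly quadruple the time:

          use strict;
          use warnings;
          use Time::HiRes qw(time);

          # Toy chained hash table with a deliberately weak hash function:
          # all keys of the same length collide into one bucket.
          my @buckets;
          sub weak_hash { length($_[0]) % 64 }

          sub insert {
              my ($key) = @_;
              my $chain = $buckets[ weak_hash($key) ] ||= [];
              for my $pair (@$chain) {          # linear rescan of the chain
                  return if $pair->[0] eq $key;
              }
              push @$chain, [ $key, 1 ];
          }

          for my $n (1_000, 2_000, 4_000) {
              @buckets = ();
              my $t0 = time;
              insert( sprintf 'k%07d', $_ ) for 1 .. $n;   # all length 8
              printf "%5d colliding inserts: %.3fs\n", $n, time - $t0;
          }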

      And this isn't just about HTTP servers. Any perl process that gets hash keys from untrusted input used to be vulnerable to algorithmic complexity attacks. Think of a spam filter that reads an email's headers into a hash, for one hypothetical example of many.
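      A hypothetical fragment of such a filter (not any real product's code) shows how little it takes; the sender controls every key that ends up in the hash:

          use strict;
          use warnings;

          # Fold the headers of an email arriving on STDIN into a hash
          # (folded continuation lines ignored for brevity).
          my %header;
          while (my $line = <STDIN>) {
              last unless $line =~ /\S/;        # blank line ends the headers
              $header{ lc $1 } = $2
                  if $line =~ /^([!-9;-~]+):[ \t]*(.*)/;
          }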

      addressed in a clumsy and over-engineered fashion.
      Patches welcome....

      Dave.

        Patches welcome....

        I tried to resist, but: that's a joke, right?

        I put the odds of my getting a patch to perl5 accepted at somewhere between: in a month of Sundays; and: when Hell freezes over.

        Been there, done that. Torn the tee-shirt in frustration at the petty-mindedness of the process.

        Twice bitten, thrice shy.

        Your very first post on this topic made an unevidenced assertion that the hash-related security fixes done in 5.18.0 were unnecessary and may have significantly slowed the perl interpreter.

        No. I offered the possibility that one of the differences visible from the scant information provided, the different hashing algorithm, might be responsible for the slowdown. E.g., it might be that the OP's data invoked pathological behaviour with the new algorithm, but not with the old.

        Given it is a simple recompile to verify one way or the other, why wouldn't he check?

        I also mentioned in passing that (IMO) the change of algorithm was unnecessary. No assertion; just my opinion. An opinion that you have said nothing to change.

        It's inflammatory because you've taken an in-passing expression of my opinion and made a mountain out of a molehill, deliberately inflaming a thread with a discussion that has no benefit to the OP nor merit to this place.

        That's utterly trivial. You send an HTTP request to the server with a bunch of headers or parameters containing the generated keys.

        You might just as well send a request that contains 100 billion headers.

        Or you could supply some JSON to a server that processes JSON input.

        To the same perl process that supplied you with the headers. Hm.

        Think of a spam filter that reads an email's headers into a hash for one hypothetical example of many

        So how does the attacker persuade the spam filter to give him an unsorted set of hash keys in order that he can find the seed to generate the headers?

        The biggie was that if you supplied a suitable small set of keys in a request (no need to know the server's seed), you could force the perl process to allocate gigabytes' worth of memory.

        Care to supply me with a suitable set of keys such that I can test that assertion? Because outside of this (new to me) claim, you've still to present a realistic scenario for an exploit.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
        In the absence of evidence, opinion is indistinguishable from prejudice.