It concerns the hash function used in Perl's implementation of hashes.
Re: Hash Clash on purpose
by l2kashe (Deacon) on Jun 02, 2003 at 20:08 UTC
Though it might be something closer to: the problem described causes the hash to work harder to place each key, degrading performance to an O(n * n) time frame. But it assumes the attacker has a quasi-direct pipe to the hashing function's input.

There are situations where it makes sense to track data collected in a hash (letter/number counting comes to mind off the top of my head), but the kind of pre-constructed data required for this attack to succeed raises other questions in my mind. If this is something other than trivial tracking of data, and is in fact some larger piece of production code we are talking about, then you don't trust anything coming in from outside, right?

I would be interested to hear what other monks think after reading the paper; personally, it would be interesting to hear from individuals who actually work on the Perl codebase, and what they think of this situation. Also, I didn't see any code to reproduce this, but I also didn't look too hard. I think I may revisit this now.

MMMMM... Chocolaty Perl Goodness.....
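To make the quadratic claim concrete, here is a toy chained hash table in Perl (an illustration, not Perl's internals): when an adversarial bucket function sends every key to one chain, each insert scans everything inserted so far, which is the O(n * n) behaviour the paper exploits.

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Insert into a chained table: walk the chain looking for the key,
# append if absent.  With all keys in one bucket this is O(n) per insert.
sub insert {
    my ($buckets, $bucket_of, $key, $val) = @_;
    my $chain = $buckets->[ $bucket_of->($key) ] ||= [];
    for my $pair (@$chain) {
        return $pair->[1] = $val if $pair->[0] eq $key;
    }
    push @$chain, [ $key, $val ];
}

my %scheme = (
    adversarial => sub { 0 },                             # every key collides
    normal      => sub { unpack('%32C*', $_[0]) % 1024 }, # keys spread out
);
for my $name (sort keys %scheme) {
    my @buckets;
    my $t0 = time;
    insert(\@buckets, $scheme{$name}, "key$_", $_) for 1 .. 3_000;
    printf "%-11s %.2f seconds for 3000 inserts\n", $name, time - $t0;
}
```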
by iburrell (Chaplain) on Jun 02, 2003 at 21:29 UTC
There is one very common module that accepts input from the outside and inserts strings into a hash table: CGI.pm. It stores the parameter names as keys in a hash table. As an experiment, I took the 10,000 strings from the paper, turned them into a query string of about 200 KB, and POSTed it to a CGI script. The CGI process ran for 3 minutes at 99% CPU before it finished parsing. Lots of CGI scripts don't set any POST limit for forms; even for those that do, plenty of strings fit within the sizes I have seen used (1-10 MB). POST limits are usually designed to prevent file uploads, not pathological queries.
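A sketch of that experiment (the original test script wasn't posted; this assumes a hypothetical file collisions.txt holding the paper's colliding strings one per line, with URL-safe keys, and drives the parse via QUERY_STRING rather than a real POST for simplicity):

```perl
use strict;
use warnings;
use CGI;

# Build a query string from the colliding keys: "k1=1&k2=1&...".
open my $fh, '<', 'collisions.txt' or die "collisions.txt: $!";
chomp(my @keys = <$fh>);
my $query = join '&', map { "$_=1" } @keys;

# Hand it to CGI.pm the way the web server would.
$ENV{REQUEST_METHOD} = 'GET';
$ENV{QUERY_STRING}   = $query;

my $start = time;
my $q     = CGI->new;      # parsing inserts every name into a hash
my @names = $q->param;
printf "parsed %d parameters in %d seconds\n",
    scalar @names, time - $start;
```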
by l2kashe (Deacon) on Jun 02, 2003 at 22:38 UTC
On another note, though: aren't the hash keys in a form and such predefined? Thinking this through, it would take a maliciously crafted URL to actually exploit a backend Perl-based CGI, as opposed to going through the "normal" processing channel, since in this exploit it's really the keys that are the issue rather than the values. Since we are talking about DoS attacks on systems, this is a valid attack. It would most certainly be easy to track via some simple log searching, absent some proxying, but that's kinda outside the scope of the thread...

New question: is there a way to "fix" CGI.pm? Thinking it through, it feels knee-jerkish to point at CGI, and it might be outside the scope of the module's varied duties. Though it would be interesting if you could code something to the effect of the sketch below, which would only allow foo, bar, and baz, and silently drop everything else from the URL. But that kind of breaks the standalone nature of many CGI programs, and the parameters would need to be checked/cleaned prior to actually being parsed. On the plus side, this could lead to many more sites being far more secure than they presently are, as it's one more obstacle to hurdle in the never-ending hunt for other people's processing cycles. Kinda like 'use strict' for CGI :)

I'm still hoping to get feedback from more "senior" members of the community, to get a handle on what they really think of the issue. Also, does/will this affect Perl 6?

MMMMM... Chocolaty Perl Goodness.....
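A minimal sketch of that whitelist idea (a hypothetical interface; CGI.pm has no built-in option for this, and note it filters after parsing, so on its own it would not stop the parsing-time DoS):

```perl
use strict;
use warnings;
use CGI;

# Hypothetical whitelist: only these parameter names survive.
my %allowed = map { $_ => 1 } qw(foo bar baz);

my $q = CGI->new;
for my $name ($q->param) {
    $q->delete($name) unless $allowed{$name};   # silently drop the rest
}
```

As the post above notes, to actually defeat the attack the check would have to happen before CGI.pm builds its parameter hash, for example by rejecting oversized query strings up front.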
by iburrell (Chaplain) on Jun 02, 2003 at 22:57 UTC
by mr_mischief (Monsignor) on Jun 03, 2003 at 02:52 UTC
by Anonymous Monk on Jun 04, 2003 at 08:20 UTC
by Anonymous Monk on Jun 02, 2003 at 23:15 UTC
Thanks for the experimental result; I'll note this thread on the webpage. Also remember that the behavior is quadratic: doubling the number of strings quadruples the time. 20,000 strings (about 500 KB) would take 12 minutes; 100,000 strings would run for 5 hours. Scott Crosby
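The arithmetic behind those figures, extrapolating quadratically from the measurement above (10,000 strings in about 3 minutes):

```perl
use strict;
use warnings;

# t(n) = t0 * (n/n0)**2, anchored at the measured 10,000 strings / 180 s.
my ($n0, $t0) = (10_000, 180);
printf "%7d strings: ~%4.0f minutes\n", $_, $t0 * ($_ / $n0)**2 / 60
    for 10_000, 20_000, 100_000;
# prints ~3, ~12, and ~300 minutes (5 hours)
```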
by iburrell (Chaplain) on Jun 03, 2003 at 01:03 UTC
Re: Hash Clash on purpose
by BrowserUk (Patriarch) on Jun 03, 2003 at 04:08 UTC
Currently, circa 5.8.0, the hash value for a new hash is initialised to 0. Would initialising this to a random 32-bit value, chosen fresh for each hash at runtime, largely alleviate if not entirely suppress the possibility of a hash-clash attack against a Perl hash-based system? Whilst it would still be possible to compute 48 generators that could be combined as described in the article for any given initialisation of the hash, computing them for all 2**32 possible initialisations would be considerably more expensive. Without a way of determining which initialisation had been randomly chosen for any given hash, at any given start of any given program on any given system, wouldn't choosing the right set of 48 generators become almost impossible? Or am I missing something obvious again?

Update: The change I was proposing would affect the following code from hv.c:59 (as of 5.8.0).
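The quoted source isn't preserved in this copy of the thread. As a stand-in, here is a from-memory Perl rendering of the One-at-a-Time hash that 5.8.0's PERL_HASH macro implements, with the proposed change of seeding the accumulator instead of starting it at 0 (treat the details as an approximation, not a verbatim translation):

```perl
use strict;
use warnings;

# One-at-a-Time hash (the algorithm behind 5.8.0's PERL_HASH), with the
# single proposed change: the accumulator starts from a caller-supplied
# random $seed instead of the hard-coded 0.
sub perl_hash {
    my ($str, $seed) = @_;
    my $hash = defined $seed ? $seed : 0;
    for my $byte (unpack 'C*', $str) {
        $hash = ($hash + $byte)         & 0xFFFF_FFFF;
        $hash = ($hash + ($hash << 10)) & 0xFFFF_FFFF;
        $hash = ($hash ^ ($hash >> 6))  & 0xFFFF_FFFF;
    }
    $hash = ($hash + ($hash << 3))  & 0xFFFF_FFFF;
    $hash = ($hash ^ ($hash >> 11)) & 0xFFFF_FFFF;
    return ($hash + ($hash << 15)) & 0xFFFF_FFFF;
}

# The same key hashes differently under different seeds, so a key set
# crafted to collide under seed 0 falls apart under a random seed.
my $seed = int rand 2**32;
printf "%s: %08x (seed 0)  %08x (random seed)\n",
    $_, perl_hash($_, 0), perl_hash($_, $seed) for qw(foo bar);
```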
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller | [reply] [d/l] |
by tilly (Archbishop) on Jun 03, 2003 at 16:08 UTC
However, assuming that the analysis shows that changing the initialisation value does change hashing decisions, your pseudo-code looks wrong to me. You are initialising it randomly per hash lookup. For hashing to work, the hash lookup algorithm has to be consistent from lookup to lookup (the toy example below shows what goes wrong otherwise). Instead, what you need to do is save the initial value somewhere and then pull that into hash_PeRlHaSh. That means you have to store it somewhere. Several options exist:

- reserve space per hash to store its initialisation value, and look that up per lookup;
- have one global value chosen at runtime for all of your hashes;
- make it a random compile-time constant.

Problems with binary linking of XS modules that have been copied from point A to B make the last one infeasible. The first one adds 4 bytes to every hash, which isn't that much, but we have a lot of small hashes; an offhand guess is that we would see a 1-5% space-usage increase, and (of course) binary incompatibility. The middle option (a run-time random constant) looks to be the best bet. p5p might have some arguments over binary compatibility (code compiled with hash lookups initialised to 0 won't match code compiled with your new initialisation), but it should be easy to make initialising randomly versus to 0 a compile-time flag.

Hmm... looks like I argued myself into believing that you can fix the problem with your approach. It would be worthwhile for you to make the fix, run tests to verify that it does make the performance attack infeasible, and then try to get it accepted... :-)
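A toy illustration of that consistency requirement (with a hypothetical stand-in hash, not Perl's internals): if the seed changes between the insert and the lookup, the key is stored in one bucket and sought in another.

```perl
use strict;
use warnings;

# Tiny seeded string hash, standing in for PERL_HASH.
sub h {
    my ($s, $seed) = @_;
    my $h = $seed;
    $h = ($h * 33 + ord $_) & 0xFFFF_FFFF for split //, $s;
    return $h;
}

my $key     = 'answer';
my $buckets = 1024;
my $stored  = h($key, 111) % $buckets;   # bucket used at insert time
my $sought  = h($key, 999) % $buckets;   # bucket probed at lookup time
printf "stored in bucket %d, sought in bucket %d%s\n",
    $stored, $sought,
    $stored == $sought ? '' : ' -- the lookup misses its own key';
```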
by BrowserUk (Patriarch) on Jun 03, 2003 at 17:58 UTC
Agreed. My pseudo-code wasn't up to much. However, I just remembered/re-discovered an interesting thing. In common with CVs, SVs and AVs, the HV structure has an NVX field. This is the field SVs use to store their numeric value once they have been used in a numeric context. In the HV, it is unused. It seems to me that this would be the perfect place to store a random value, generated at (hash) creation time, that could be used to initialise the hash function. That would effectively allow for a different hashing function for every hash within a program, and for every run, with no extra space required. The only hits would be:

- generating one random value each time a hash is created;
- fetching that stored value on each lookup to seed the hash function.
The only real argument against the idea that I foresee is that an application could randomly hit worst-case performance. Statistically, the odds of this happening are, I think, much the same as they are now for any given dataset, but the possibility that an application which loads the same hash with the same data every run could randomly have that hash degenerate to the worst case could be seen as a problem. I really wish I was set up to build, test, and propose this fix to p5p, but I'm not, and am unlikely to be in the near future. Maybe Elian or somebody can pass the idea along if they think it has some merit.

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller | [reply] |
Re: Hash Clash on purpose
by gjb (Vicar) on Jun 02, 2003 at 19:48 UTC
You might want to have a look at hash collision DOS, which mentions this article and happens to be in the Weekly Best. Just my 2 cents, -gjb-