in reply to Re: Perl Hashes in C?
in thread Perl Hashes in C?
These are TrueColor images. I am dealing with 281 TRILLION possible colors: a full 16 bits/channel, 48 bits/pixel.
"Found 81814 colors in 131072000 pixels" -> 1602 Pixels per color.
The 216MB Photoshop RAW/16 file had 27 MILLION unique colors out of 36M: "Pixels=36152321, unique Colors=27546248=76.19%".
76% of the pixels have unique colors! This makes your hashing algorithm rehash everything when it lands on a dup.
I am monkeying with the MAX_UNSORTED parameter, which determines how many new, random colors can pile up on top of the lookup table before a sort has to be done.
I had it set at a way, way too low 200. I wrote a Perl script to run the C program with varying MAX_UNSORTED values and am seeing vastly better performance; 3805 is the best so far. The linear searches on top of the pile are pretty cheap compared to QSorting and merging.
With a 1 in 3 sampling (12M of 36M), I have it down to < 46 seconds with 88.55% unique colors.
The larger the number of unique colors, the more it pays to leave a pile of unsorted colors on top.
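Since the scheme may be easier to see in code, here is a small Perl sketch of the strategy as I have described it. The real program is C; the names, search details and sample data below are illustrative assumptions only.

```perl
use strict;
use warnings;

my $MAX_UNSORTED = 3805;   # how many new colors may pile up before a sort/merge
my @sorted;                # sorted table of known 48-bit color keys
my @pile;                  # unsorted new arrivals on top of the table

sub merge_sorted {         # linear merge of two sorted arrays
    my ( $x, $y ) = @_;
    my @out;
    my ( $i, $j ) = ( 0, 0 );
    while ( $i < @$x && $j < @$y ) {
        push @out, $x->[$i] <= $y->[$j] ? $x->[ $i++ ] : $y->[ $j++ ];
    }
    push @out, @{$x}[ $i .. $#$x ], @{$y}[ $j .. $#$y ];
    return @out;
}

sub seen {
    my ($color) = @_;

    # Binary search the sorted part of the table.
    my ( $lo, $hi ) = ( 0, $#sorted );
    while ( $lo <= $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        return 1 if $sorted[$mid] == $color;
        $sorted[$mid] < $color ? ( $lo = $mid + 1 ) : ( $hi = $mid - 1 );
    }

    # Cheap linear scan of the small unsorted pile.
    return 1 if grep { $_ == $color } @pile;

    # New color: pile it on top; sort and merge only when the pile is big.
    push @pile, $color;
    if ( @pile > $MAX_UNSORTED ) {
        @sorted = merge_sorted( \@sorted, [ sort { $a <=> $b } @pile ] );
        @pile   = ();
    }
    return 0;
}

# Usage example with random 48-bit pixel keys.
my @pixels = map { int rand 2**48 } 1 .. 100_000;
my $unique = 0;
for my $p (@pixels) { $unique++ unless seen($p) }
printf "%d unique colors in %d pixels\n", $unique, scalar @pixels;
```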
The one I did before was in the ColorMatch colorspace and it had ~76% unique colors. This one is ProPhoto and is over 85%! Same NEF file, same ACR settings, no Photoshop work other than to import from ACR and save as RAW.
It looks like I need to work on the Sort_Merge. QSorting the entire 27-million-tall stack, 99% of it already sorted, was taking 98% of the program time. The shuffle_merge is 100 times faster on this problem.
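A scaled-down Perl comparison of the two approaches, purely to illustrate the shape of the tradeoff (sizes and iteration counts are assumptions, far below the real 27M case; the actual program and its shuffle_merge are C):

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Re-sort an almost-sorted table versus sort a small pile and merge it in.
my @table = sort { $a <=> $b } map { int rand 2**48 } 1 .. 1_000_000;
my @pile  = map { int rand 2**48 } 1 .. 4_000;

timethese( 10, {
    resort_everything => sub {
        my @all = sort { $a <=> $b } @table, @pile;
    },
    sort_pile_and_merge => sub {
        my @p = sort { $a <=> $b } @pile;
        my @out;
        my ( $i, $j ) = ( 0, 0 );
        while ( $i < @table && $j < @p ) {
            push @out, $table[$i] <= $p[$j] ? $table[ $i++ ] : $p[ $j++ ];
        }
        push @out, @table[ $i .. $#table ], @p[ $j .. $#p ];
    },
} );
```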
Re^3: Perl Hashes in C?
by BrowserUk (Patriarch) on Aug 12, 2015 at 05:36 UTC
> The 216MB Photoshop RAW/16 file had 27 MILLION unique colors out of 36M

This is a perl creating a hash with 27 million keys:
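(BrowserUk's actual benchmark code was not preserved here; the following is a minimal stand-in sketch of building a hash with that many keys. The key count and timing method are assumptions.)

```perl
use strict;
use warnings;

# Stand-in sketch: build a hash with millions of keys and time it.
# Shrink $n if memory is tight; 27 million keys needs several GB of RAM.
my $n = 27_000_000;
my %colors;
my $t0 = time;
$colors{$_} = undef for 1 .. $n;
printf "%d keys in %d seconds\n", scalar keys %colors, time - $t0;
```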
53 seconds!

> 76% of the pixels have unique colors! This makes your hashing algorithm rehash everything when it lands on a dup.

Sorry, but if you mean "rehash every preexisting key", you are wrong. Why do you believe that? (If you mean something else by the highlighted phrase above, you're gonna have to explain yourself better.)

The beauty of hashing for this application is that it doesn't matter what the range is, only the actual total. For each pixel in your image you either need to add a new key; or increment an existing value. Either takes approximately the same amount of time: circa 0.000000457763671875 of a second on my rather ancient hardware. Indexing your 48-bit values (as opposed to my 32-bit ones) will take ~50% longer; so perhaps 40 seconds to count the colours in my 125 mega-pixel image.

> I have it down to < 46 seconds with 88.55% unique colors

If you've already trimmed your OP time of "2.17 hours" to 48 seconds, why have you wasted our time by asking this question? Another monk that I won't waste my time reading and producing solutions for in future.
by Anonymous Monk on Aug 13, 2015 at 01:26 UTC
This is the trivial hash, guaranteed to never have a collision with sequential integers. And it has no data. I wrote a test script to see how fast the suggested Perl hashing is with Big Data and millions of collisions. No snide comments about my backwoodsy writing style, please :)
This thing is 3 times as fast as the C program I wrote.
It has to be the hashing function calculating an address. The C program, bless its little heart, had to do a Binary Search over an ever-growing lookup table.

There was one severe problem with getting the 48-bit hash key right. I packed 3 verified unsigned shorts into a "Q". The R, G and B printed perfectly and agreed with the bitshift/AND values, but the QUAD was always zero. It's kind of hard to see, but they all agree except for the QUAD. I followed the written documentation from http://perldoc.perl.org/functions/pack.html. I had to do a bitshift, <<16 for the GREEN and <<32 for the BLUE, and OR them together to get a single 48-bit value that worked.

The performance is MOST IMPRESSIVE!! Thanks for the Pointers (References?)
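For reference, a minimal sketch of that working key construction (not the monk's actual script; the channel values below are made-up sample data):

```perl
use strict;
use warnings;

my ( $r, $g, $b ) = ( 0x1234, 0x5678, 0x9ABC );   # 16-bit channel values

# Shift GREEN up 16 bits and BLUE up 32 bits, then combine with bitwise
# OR (ANDing the shifted values would always give zero).
my $key = $r | ( $g << 16 ) | ( $b << 32 );        # needs a 64-bit perl

my %count;
$count{$key}++;                    # the integer itself works as a hash key
$count{ pack 'Q', $key }++;        # or pack it into an 8-byte string ("Q")

printf "48-bit key: 0x%012X\n", $key;
```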
by BrowserUk (Patriarch) on Aug 13, 2015 at 05:08 UTC
FWIW, this constructs a 216 MB file of random bytes in memory; then indexes it in 6-byte chunks to find 28 million "colors" in 37 million pixels in 56 seconds (and I'm guessing your machine is a lot faster than mine):
by BrowserUk (Patriarch) on Aug 13, 2015 at 01:40 UTC
> The performance is MOST IMPRESSIVE!! Thanks for the Pointers (References?)

You're welcome.
by Anonymous Monk on Aug 13, 2015 at 01:58 UTC
> Sorry, but if you mean "rehash every preexisting key", you are wrong. Why do you believe that?

As I recall, when a hash algorithm is selected, there is a tradeoff between performance and probability of uniqueness. Some fraction of the data, up to and including the entire key, may be used in the hash key. If all of the distinct keys tried actually give non-colliding hash values, then the tradeoff worked. Otherwise, another algorithm must be selected which uses either more of the key or a different algorithm. That triggers a total recalc. No?

Not even the Perl Gurus can know in advance that your data would have 8000 distinct colors and that mine would have 27 million. And hashing 48-bit, quantumly random data has to be much more than 2.0 times as hard on a hashing algorithm as 24-bit data. There are 3300 times more buckets to keep track of. Working in a 16E6 color space does not in any way seem like it should be half as hard as a 281474976710656 color space. What I was seeing in my ill-fated C attempt was dramatically longer run times with modest increases in data volume, from the ever-expanding lookup table. That is why I was looking for a hashing formula!
>> If you've already trimmed your OP time of "2.17 hours" to 48 seconds, why have you wasted our time

When I asked the question, the run-time was hours. Way beyond my nano-scale attention span. While the Monks were busy writing up many questions, I was beaverishly instrumenting my code to find out where all the time was being squandered. QSort was taking 98% of the time! By replacing <dumb old> QSort with a brilliantly conceived "Shuffle Merge" (TM :) and increasing my MAX_UNSORTED value from 200 (D'oh!) to a more workable 7920, I was able to realize the astonishingly better run time of roughly a minute. 120 X faster? Dang!
by BrowserUk (Patriarch) on Aug 13, 2015 at 02:09 UTC
> Sorry, but if you mean "rehash every preexisting key", you are wrong. Why do you believe that?
>
> As I recall, when a hash algorithm is selected, there is a tradeoff between performance and probability of uniqueness. Some fraction of the data up to and including the entire key may be used in the hash key. If all of the distinct keys tried actually give non-colliding hash values, then the tradeoff worked. Otherwise, another algorithm must be selected which uses either more of the key or a different algorithm.

Perl uses the same hashing algorithm for all hashes regardless of their content; and never changes it during the life of a hash. Collisions are dealt with using bucket chains; when the fill ratio reaches a certain level (75% I think), it creates a new hash double the size of the existing one and moves the existing key/value pairs to that new hash; but it doesn't need to recalculate hash values, because these are stored (the full 32-bit calculated value) in the data structure with the keys; so to find a key's position in the new, bigger hash it has only to re-mask that value to give an index into the array of pointers that is the basis of the hash structure, and then copy the pointer over. No rehashing is needed.
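To make that concrete, here is a toy Perl model of the scheme just described (purely illustrative; perl's real hash is implemented in C and differs in detail). Each entry carries its stored hash value, and growing the table only re-masks that value:

```perl
use strict;
use warnings;

sub toy_hash {                       # stand-in hash function (djb2-style)
    my $h = 5381;
    $h = ( $h * 33 + $_ ) & 0xFFFF_FFFF for unpack 'C*', shift;
    return $h;
}

my $size    = 8;                     # bucket count, always a power of two
my @buckets = map [], 1 .. $size;
my $keys    = 0;

sub store {
    my ( $key, $value ) = @_;
    my $h = toy_hash($key);          # computed once and stored below
    push @{ $buckets[ $h & ( $size - 1 ) ] }, [ $h, $key, $value ];
    grow() if ++$keys > 0.75 * $size;    # fill-ratio trigger
}

sub grow {
    $size *= 2;
    my @new = map [], 1 .. $size;
    for my $chain (@buckets) {
        # Re-mask the stored hash value; toy_hash() is never called here.
        push @{ $new[ $_->[0] & ( $size - 1 ) ] }, $_ for @$chain;
    }
    @buckets = @new;
}

store( "color$_", $_ ) for 1 .. 100;
printf "%d keys spread over %d buckets, no rehashing\n", $keys, $size;
```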
by Anonymous Monk on Aug 12, 2015 at 15:13 UTC
by BrowserUk (Patriarch) on Aug 12, 2015 at 15:29 UTC
Which comments and why? If you want to impose your views and sensibilities upon me, then at least have the balls to explain yourself properly, even if you are too much of a coward to do so under your attributable name/handle.