L("XXYY") = ( L("XX") + L("YY") ) % 10
for all even length substrings XX and YY
=> a divide and conquer could be applied
=> computation complexity replaced with memory complexity.
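A quick sketch of why the identity holds (the function name `luhn_sum` is mine, not from the thread): doubling parity is counted from the rightmost digit, and prepending an even-length chunk shifts the suffix's positions by an even amount, so the parity of every suffix digit is preserved.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Luhn sum mod 10, doubling every second digit counted from the right.
sub luhn_sum {
    my ($s) = @_;
    my ($sum, $pos) = (0, 0);
    for my $d (reverse split //, $s) {
        my $v = $pos++ % 2 ? $d * 2 : $d;
        $v -= 9 if $v > 9;    # doubling then digit-summing
        $sum += $v;
    }
    return $sum % 10;
}

# Even-length chunks keep the doubling parity of the suffix intact:
my ($xx, $yy) = ('1234', '567890');
printf "L(xx.yy)=%d  (L(xx)+L(yy))%%10=%d\n",
    luhn_sum($xx . $yy),
    (luhn_sum($xx) + luhn_sum($yy)) % 10;

# Leading zeros add nothing to the sum:
printf "L(1234)=%d  L(001234)=%d\n", luhn_sum('1234'), luhn_sum('001234');
```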
> The OP's question posited 15 digits; that's a really long string.

One would do 3 lookups; leading zeros of the left-most chunk shouldn't influence the checksum.
L("XX") = L("0XX") = L("00XX") ) = ...
> where do the 1 million entries come from?
6 digits sounds like a good compromise: the chunk length needs to be even anyway, and 100 MB for 8 digits sounds excessive. (A Perl array has roughly a factor-10 memory overhead, IIRC.)
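A minimal Perl sketch of the table approach, using 4-digit chunks (a 10**4-entry table) just to keep the demo small; 6-digit chunks work the same way with left-padding to 18 digits and 3 lookups. All names here are illustrative, not from the thread.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Reference Luhn sum (doubling parity counted from the right).
sub luhn_sum {
    my ($sum, $pos) = (0, 0);
    for my $d (reverse split //, $_[0]) {
        my $v = $pos++ % 2 ? $d * 2 : $d;
        $v -= 9 if $v > 9;
        $sum += $v;
    }
    return $sum % 10;
}

# Precompute the Luhn sum of every 4-digit chunk (10**4 entries).
my @table = map { luhn_sum(sprintf "%04d", $_) } 0 .. 9999;

# Checksum a 15-digit number: left-pad with zeros to 16 digits
# (leading zeros don't change the sum), then combine 4 table lookups.
sub luhn_sum_chunked {
    my ($num) = @_;
    my $s = ('0' x (16 - length $num)) . $num;
    my $sum = 0;
    $sum += $table[ substr($s, 4 * $_, 4) ] for 0 .. 3;
    return $sum % 10;
}

my $n = '123456789012345';    # 15 digits
print "table: ", luhn_sum_chunked($n), "  direct: ", luhn_sum($n), "\n";
```

The same structure scales to 6-digit chunks by building the table over `0 .. 999_999` and padding to 18 digits.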
Anyway, a C implementation whose table fits into the processor's cache might still be faster.
I'm not sure a 10k string for a 4-digit partition would always fit into the L1 cache.
The best approach might be a C program that dynamically benchmarks different chunk sizes to measure cache-line effects before building the lookup table.
Cheers Rolf
(addicted to the Perl Programming Language :)
Wikisyntax for the Monastery
FootballPerl is like chess, only without the dice
In reply to Re^4: Faster Luhn Check Digit Calculation?
by LanX
in thread Faster Luhn Check Digit Calculation?
by kschwab