in reply to Quicker Array Searching

I have to say that I seriously doubt that Text::Levenshtein is the right tool for this job.

There are several problems.

You might get somewhere by applying the Levenshtein algorithm using words (or their indices in a lookup table) instead of characters, case-folded, with whitespace and markup excluded, but you would still be fooled by simple typos and such.
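As a rough illustration of that idea (in Python rather than Perl, and skipping markup stripping), a word-level edit distance is just the classic dynamic-programming algorithm with words as the alphabet; the function name and normalisation choices here are my own:

```python
def word_levenshtein(a, b):
    # Normalise: case-fold and split on whitespace (markup stripping omitted)
    xs = a.casefold().split()
    ys = b.casefold().split()
    # Standard DP edit distance, but the units are whole words, not characters
    prev = list(range(len(ys) + 1))
    for i, x in enumerate(xs, 1):
        cur = [i]
        for j, y in enumerate(ys, 1):
            cost = 0 if x == y else 1
            cur.append(min(prev[j] + 1,        # delete a word
                           cur[j - 1] + 1,     # insert a word
                           prev[j - 1] + cost  # substitute a word
                           ))
        prev = cur
    return prev[-1]
```

Note that a single typo ("brown" vs "brwon") still counts as a full word substitution, which is exactly the weakness described above.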

The Levenshtein algorithm is all about the comparative positions of similar elements. The result of any one comparison is not itself comparable with any other.

It would be an interesting exercise to devise an algorithm that would not only compare bodies of text for similarity, but also produce a metric for each piece of text that could be compared directly with that from another.

It might work by producing a list of indices for the words (sans whitespace, markup, case, etc.) in a table, then using each word's index to look up a prime number.

If 'A' was the first word in your table, its index would be 0, which would then map to the prime 2. If 'an' was the second, that would be mapped to the prime 3, and so on.

You then multiply the prime for each word by the frequency of its occurrence and sum the results.
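The scheme above can be sketched in a few lines (again in Python for illustration; the helper names and the trial-division prime generator are my own choices, not part of the original proposal):

```python
from collections import Counter

def first_primes(n):
    # First n primes by trial division; fine for a sketch
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

def fingerprint(text, table):
    # table: the word lookup table; word at index i maps to the i-th prime
    index = {w: i for i, w in enumerate(table)}
    ps = first_primes(len(table))
    counts = Counter(text.casefold().split())
    # Sum of (prime for word) * (frequency of word)
    return sum(ps[index[w]] * c for w, c in counts.items() if w in index)
```

With the table ['a', 'an'], the text "A an a" scores 2*2 + 3*1 = 7.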

You end up with a (huge) number that reflects both the words found and their frequencies in a semi-unique way. You've thrown away 'extraneous' information, like case, whitespace, etc., but also relative position. It still doesn't handle typos or foreign languages, and it doesn't in any way reflect the meaning of the text. The results might be interesting though.

It might be worth looking up expertise and algorithms for plagiarism detection, though again, I think that these will tend to only detect similarity between two bodies of text, not produce a value by which more than two bodies can be cross-related, nor scores that can themselves be directly compared.


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
Hooray!