I have to say that I seriously doubt that Text::Levenshtein is the right tool for this job.
The problems are:
The numerical results of comparing A <-> B and A <-> C are not themselves comparable in any meaningful fashion. Even two identical phrases that differ only in the amount of whitespace between the words, or in case or capitalisation, will receive different scores, even though the human eye would perceive them as being similar.
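For example, a quick demonstration using Text::Levenshtein's exported distance() (the phrases here are mine, chosen for illustration):

    use strict;
    use warnings;
    use Text::Levenshtein qw( distance );

    # A human reads these as the same phrase; Levenshtein counts 4 case
    # substitutions and 3 extra spaces, scoring 7 rather than 0.
    print distance( 'the quick brown fox', 'The  Quick  Brown  Fox' ), "\n";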
Simple transpositions of characters in a word, words in a sentence, or whole sentences will cause dramatic differences in the score.
    use Text::Levenshtein qw( distance );

    print distance(
        'the quick brown fox jumps over the lazy dog',
        'jumps over the lazy dog the quick brown fox jumps'
    );    # prints 42
There are an infinite number of completely unrelated pairs of sentences -- they could be on different subjects, the same subject using different words, the same words misspelt, or in entirely different languages -- that would produce a "match" at this score of 42.
You might get somewhere by applying the Levenshtein algorithm using words (or their indices in a lookup table) instead of characters, case-folded and with whitespace and markup excluded, but you would still be fooled by simple typos and the like.
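Something along these lines (a minimal sketch; the tokenisation regex and the pure-Perl DP loop are my own illustration, not part of Text::Levenshtein):

    use strict;
    use warnings;

    # Word-level Levenshtein: split each text into case-folded words,
    # ignoring whitespace and punctuation, then run the standard
    # edit-distance recurrence over the word lists instead of characters.
    sub word_distance {
        my( $text_a, $text_b ) = @_;
        my @a = map { lc } $text_a =~ m[(\w+)]g;
        my @b = map { lc } $text_b =~ m[(\w+)]g;

        my @prev = 0 .. @b;        # row for zero words of @a consumed
        for my $i ( 1 .. @a ) {
            my @curr = ( $i );
            for my $j ( 1 .. @b ) {
                my $cost = $a[ $i - 1 ] eq $b[ $j - 1 ] ? 0 : 1;
                my $best = $prev[ $j ] + 1;                                      # deletion
                $best = $curr[ $j - 1 ] + 1    if $curr[ $j - 1 ] + 1    < $best; # insertion
                $best = $prev[ $j - 1 ] + $cost if $prev[ $j - 1 ] + $cost < $best; # substitution
                push @curr, $best;
            }
            @prev = @curr;
        }
        return $prev[ -1 ];
    }

    print word_distance(
        'the quick  Brown fox',
        'the quick brown fox'
    ), "\n";    # 0: case and extra whitespace no longer matter

As the output shows, layout and case differences disappear, but a single typo still costs as much as a completely different word.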
The Levenshtein algorithm is all about comparative positions of similar elements. The results of any one comparison are not themselves comparable with any other.
It would be an interesting exercise to devise an algorithm that would not only compare bodies of text for similarity, but also produce a metric for each piece of text that could be compared directly with that from another.
It might work by producing a list of indices for the words (sans whitespace, markup, case etc.) in a table, then using each word's index to look up a prime number.
If 'A' was the first word in your table, its index would be 0, which would map to the prime 2. If 'an' was the second, it would map to the prime 3, and so on.
You then multiply the prime for each word by the frequency of its occurrence and sum the results.
You end up with a (huge) number that reflects both the words found and their frequencies in a semi-unique way. You've thrown away 'extraneous' information, like case, whitespace etc., but also relative position. It still doesn't handle typos or foreign languages, and it doesn't in any way reflect the meaning of the text. The results might be interesting though.
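A sketch of the idea (I've simplified by assigning each word the next prime in first-seen order, rather than building the word table up front; trial division is enough for illustration):

    use strict;
    use warnings;

    my %prime_of;        # word => prime, assigned in first-seen order
    my $candidate = 1;

    # Return the next prime after the last one handed out.
    sub next_prime {
        CANDIDATE: while( 1 ) {
            ++$candidate;
            for my $d ( 2 .. sqrt $candidate ) {
                next CANDIDATE unless $candidate % $d;
            }
            return $candidate;
        }
    }

    # Sum over the distinct words of ( word's prime * word's frequency ),
    # after folding case and discarding whitespace and punctuation.
    sub fingerprint {
        my( $text ) = @_;
        my %freq;
        ++$freq{ $_ } for map { lc } $text =~ m[(\w+)]g;
        my $score = 0;
        for my $word ( keys %freq ) {
            $prime_of{ $word } ||= next_prime();
            $score += $prime_of{ $word } * $freq{ $word };
        }
        return $score;
    }

    print fingerprint( 'the cat sat on the mat' ), "\n";   # 30
    print fingerprint( 'The mat sat on the cat' ), "\n";   # 30: word order is gone

The two scores are equal precisely because relative position has been discarded; only the word multiset survives.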
It might be worth looking up expertise and algorithms for plagiarism detection, though again, I think that these will tend only to detect similarity between two bodies of text, not produce a value by which more than two bodies can be cross-related, nor scores that can themselves be directly compared.
In reply to Re: Quicker Array Searching
by BrowserUk
in thread Quicker Array Searching
by Anonymous Monk