in reply to Cluster a big bunch of strings
Even though you start with a large number of strings, you can certainly reduce the number of required comparisons by precomputing some cheap metrics. For example, checking for exact equivalence is fairly cheap. As well, assuming significant variability in title length, if you fix a Levenshtein threshold of 3, two strings whose lengths differ by 4 or more can be dropped immediately, since the length difference is a lower bound on the edit distance.
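To make that concrete, here is a minimal sketch (in Python, with a hypothetical `titles` list and a threshold of 3) of filtering pairs with the two cheap checks before any full edit-distance computation:

```python
def candidate_pairs(titles, threshold=3):
    """Yield only pairs that could possibly be within `threshold` edits.

    Exact duplicates pass trivially (distance 0), and any pair whose
    lengths differ by more than `threshold` is skipped, because the
    length difference is a lower bound on the Levenshtein distance.
    """
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            if a == b:
                yield (a, b)  # equivalence check: trivially distance 0
            elif abs(len(a) - len(b)) <= threshold:
                yield (a, b)  # survives the length pre-filter

titles = ["foobar", "foobaz", "a much longer title"]
pairs = list(candidate_pairs(titles))
# Only ("foobar", "foobaz") survives; the long title is pruned outright.
```

With very skewed length distributions, bucketing strings by length first (and only comparing adjacent buckets) avoids even looking at most pairs.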
I'm a little surprised that 1 trillion comparisons is computationally infeasible. Are you trying to do this dynamically? If so, it would seem your best bet would be caching results in a DB, thus reducing the problem to one large initial data crunch followed by a much smaller insertion operation for new additions.
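The caching idea could look something like this sketch, assuming a hypothetical `distance(a, b)` function and an SQLite store (in-memory here; a file path would persist results between runs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # swap in a file path to persist
conn.execute("""CREATE TABLE IF NOT EXISTS distances
                (a TEXT, b TEXT, d INTEGER, PRIMARY KEY (a, b))""")

def cached_distance(a, b, distance):
    """Return the stored distance for (a, b), computing it only once."""
    a, b = sorted((a, b))  # canonical order, so (a, b) and (b, a) share a row
    row = conn.execute("SELECT d FROM distances WHERE a=? AND b=?",
                       (a, b)).fetchone()
    if row:
        return row[0]
    d = distance(a, b)
    conn.execute("INSERT INTO distances VALUES (?, ?, ?)", (a, b, d))
    return d
```

New titles then only require comparisons against the existing set, not a full recrunch of the cached pairs.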
My last thought is that the probability of a typographical error is roughly proportional to string length, so you may want to use a relative distance rather than an absolute one.
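A simple way to do that is to scale the edit distance by the longer string's length. A sketch, with a plain dynamic-programming Levenshtein so it is self-contained:

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def relative_distance(a, b):
    """Edit distance scaled by the longer length: one typo in a long
    title counts for less than one typo in a short one."""
    return levenshtein(a, b) / max(len(a), len(b), 1)
```

So a single substitution in a three-character title gives a relative distance of 1/3, while the same typo in a thirty-character title gives 1/30, letting one threshold serve titles of very different lengths.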
Re^2: Cluster a big bunch of strings
by educated_foo (Vicar) on Mar 30, 2009 at 17:17 UTC
by kennethk (Abbot) on Mar 30, 2009 at 18:23 UTC