zhe has asked for the wisdom of the Perl Monks concerning the following question:
This function returns all of the longest runs of consecutive common words between two sentences. The problem is that I now have two lists of over 4000 sentences to compare, and using this function is extremely slow. Is there a way to accelerate this function? Thanks in advance. Zhe

    sub lc_substr {
        my ($s, $t) = @_;
        my $z = 0;
        my @S = (undef, split(/ /, $s));
        my @T = (undef, split(/ /, $t));
        my $m = scalar @S;
        my $n = scalar @T;
        my @LCS = ();
        my @start = ();
        my @end = ();
        for my $i ( 1 .. $m ) {
            for my $j ( 1 .. $n ) {
                if ($S[$i] =~ m/^$T[$j]$/ig && $S[$i] ne undef && $T[$j] ne undef
                    && $S[$i] ne " " && $T[$j] ne " ") {
                    $L[$i-1][$j-1] ||= 0;
                    $L[$i][$j] = $L[$i-1][$j-1] + 1;
                    if ($L[$i][$j] > $z) {
                        $z = $L[$i][$j];
                        @ret = ();
                        @start = ();
                        @end = ();
                        @prod = ();
                    }
                    if ($L[$i][$j] == $z) {
                        $start = $i-$z+1;
                        $end = $i;
                        push(@start, $start);
                        push(@end, $end);
                        push(@prod, $t);
                    }
                }
            }
        }
        $num = scalar @start;
        for ($n = 0; $n < $num; $n++) {
            if ($prod[$n] ne "") {
                $str = $prod[$n]." "."[".$start[$n].",".$end[$n]."]";
                push(@LCS, $str);
            }
        }
        return @LCS;
    }
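For comparison, here is a minimal sketch of the same word-level search with the per-cell regex match replaced by a plain eq on pre-split, pre-lowercased word arrays. It is illustrative only: the names are made up, the indices are 0-based, and the output format only approximates the original.

    use strict;
    use warnings;

    # Word-level "longest common substring": split and lowercase each sentence
    # once, then compare words with eq inside the O(N*M) dynamic-programming
    # loop instead of interpolating a regex for every pair of words.
    sub lcs_words {
        my( $s, $t ) = @_;
        my @S = split ' ', lc $s;
        my @T = split ' ', lc $t;
        my( $best, @hits ) = ( 0 );
        my @L;
        for my $i ( 0 .. $#S ) {
            for my $j ( 0 .. $#T ) {
                next unless $S[$i] eq $T[$j];
                my $len = ( $i && $j ? ( $L[$i-1][$j-1] || 0 ) : 0 ) + 1;
                $L[$i][$j] = $len;
                if( $len > $best ) { $best = $len; @hits = (); }
                push @hits, [ $i - $len + 1, $i ] if $len == $best;
            }
        }
        # Return each longest run as "words [start,end]" (0-based indices).
        return map {
            sprintf '%s [%d,%d]', join( ' ', @S[ $_->[0] .. $_->[1] ] ), @{ $_ }
        } @hits;
    }

    print "$_\n" for lcs_words( 'A C D E G F D', 'A C D' );   # a c d [0,2]

Splitting and lowercasing once per sentence, and comparing with eq, is far cheaper than interpolating and matching a new regex for every cell of the table.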
Re: LCS efficiency problem
by ikegami (Patriarch) on Jun 05, 2008 at 06:34 UTC

Pardon my abruptness in advance. There's a lot of ground to cover. Those are bugs to fix, not performance enhancements (except the tip to make everything lowercase). I didn't even try running the program.

Update: Changed @s[$start .. $end-$start+1] to @S[$start..$end]
Update: Added off-by-one error.
by BrowserUk (Patriarch) on Jun 05, 2008 at 06:44 UTC

"split(/\s+/, $s) is surely better than split(/ /, $s)."

Why? Are you forgetting that split / /, $s is magical?

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by ikegami (Patriarch) on Jun 05, 2008 at 07:46 UTC
split / / isn't magical. You're thinking of split ' '.
split ' ' is slightly better than split /\s+/, but like I said, I wouldn't use it anyway.
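For anyone following along, a quick illustration of the three behaviours being compared (my own snippet, not from either poster):

    use strict;
    use warnings;
    use Data::Dumper;

    my $s = '  foo  bar baz';

    print Dumper [ split ' ',   $s ];  # ['foo', 'bar', 'baz']             - special case, skips leading whitespace
    print Dumper [ split /\s+/, $s ];  # ['', 'foo', 'bar', 'baz']         - leading empty field
    print Dumper [ split / /,   $s ];  # ['', '', 'foo', '', 'bar', 'baz'] - one field per single space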
by lima1 (Curate) on Jun 06, 2008 at 07:13 UTC
Lowercasing should fix the wordlists (so that eq is enough). Further filtering, as suggested below, is a good idea; do that!
by ikegami (Patriarch) on Jun 06, 2008 at 18:31 UTC
Oops, missed the undef. But note that starting at one has nothing to do with the dynamic programming technique. If you wanted to start at zero (say if @S and @T are inputs to the function), then just replace with
Re: LCS efficiency problem
by BrowserUk (Patriarch) on Jun 06, 2008 at 06:43 UTC

How long is it taking now? And what is your target time?

The best way of speeding this up is to use a better algorithm.

Naively, discovering the longest common word sequence between two sentences "a b c d" & "p q r" (with letters representing arbitrary words, and assuming that a minimum LCWS of 2 words is required) means that you test whether "p q r" is found in "a b c d"; if not, then "p q", then "q r". If you search the longer string for word subsequences of the shorter, then with a 3-word shorter string you need 3 searches; with 4 words, 6 searches; with 5 words, 10. I.e. C(n,2), where n is the number of words in the shorter sentence. So by the time your shorter sentence has 10 words you need 45 string searches, and by 20 words, 190 searches.

But many of these subsequences of the shorter string simply cannot exist in the longer string, because they contain individual words that do not exist there. And vice versa: you are comparing against words in the longer string that do not exist in the shorter. So, given the sentences (again, letters represent words) "A B C D E F G H I" & "P Q C D E R S A B", you would be comparing each of these 36 subsequences from the first string against the second:
You can save a considerable number of these searches by breaking each of the strings into contiguous fragments that contain only words that exist in the other string, and then performing the LCWS search on those fragments. E.g. the first sentence above reduces to the single fragment "A B C D E", and the second to the fragments "C D E" and "A B".
The work required to find the LCWS "C D E" is considerably less. Realising that such pat examples are not too convincing, here's a more realistic example. Two sanitised sentences drawn from Huckleberry Finn:
The shorter of these two sentences contains 53 words. (Don'tcha just love Mark Twain's run-on sentences? :) So the naive approach would require searching the longer sentence's 123 words for all 1378 subsequences of the shorter.

Naive solution work factor

As a rough measure of the effort/time involved in the above, if we multiply the number of words being searched (123) by the number of words in each of the 1378 subsequences, and sum the results, we get a 'work factor' of:

    123*53*1 + 123*52*2 + 123*51*3 + 123*50*4 + ... + 123*3*51 + 123*2*52 = 123 * 26182 = 3,220,386

But if you fragment those two sentences as above, you get the following two sets of fragments:
Better algorithm work factor

If you now compare each of the A fragments against each of the B fragments, and perform similar polynomial calculations for each pairing:
Giving a total work factor of 3218, or approx. 0.1% of the work. Not totally accurate, obviously, as it doesn't take into account the effort of fragmenting the strings, but a considerable potential saving anyway.

Further savings

And it doesn't stop there. If you sort the fragments in each set, longest first, then you are most likely to discover the LCWS in the early pairings, and you can skip testing whenever a fragment is shorter than the best found so far.

Worked example

The upshot of this is that I ran my algorithm against the 2533 sentences extracted from the Project Gutenberg version of "The Adventures of Huckleberry Finn by Mark Twain", and it found the 189,983 2-word-or-longer LCWSs that result from the 3,206,778 pairings in a few seconds under 10 minutes.

Note: The math above is unchecked and may contain errors. The timings and results are correct.

Here is a small sample (the 40 longest) of the LCWSs found:
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by ikegami (Patriarch) on Jun 07, 2008 at 08:42 UTC

I don't understand your numbers. LCSS is O(N*M), so:

Time for lcss comparing the two complete sentences:

Time for lcss comparing the powerset of segments:
by BrowserUk (Patriarch) on Jun 07, 2008 at 17:14 UTC

When comparing the sentence "P Q R" (letters represent words) against "A B C D", you need to test the powerset of the shorter against the longer, e.g. "P Q R", "P Q" and "Q R". That gives 4*3 + 4*2 + 4*2 = 28 (not 4*3 = 12).

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
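As a quick sanity check of the arithmetic in this sub-thread (an illustrative snippet, not code from either poster), both the 28 above and the 3,220,386 from the worked example earlier fall out of the same sum:

    # Cost of searching an m-word sentence for every contiguous 2+ word
    # subsequence of an n-word sentence, counting m word-comparisons per
    # word of each subsequence searched for.
    sub work_factor {
        my( $n, $m ) = @_;
        my $total = 0;
        $total += ( $n - $_ + 1 ) * $m * $_ for 2 .. $n;
        return $total;
    }

    print work_factor( 3, 4 ),    "\n";   # 28        ("P Q R" vs "A B C D")
    print work_factor( 53, 123 ), "\n";   # 3220386   (the Huckleberry Finn pair above)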
by ikegami (Patriarch) on Jun 07, 2008 at 23:18 UTC
by BrowserUk (Patriarch) on Jun 07, 2008 at 23:37 UTC
by zhe (Initiate) on Jun 10, 2008 at 07:04 UTC
by BrowserUk (Patriarch) on Jun 10, 2008 at 18:51 UTC

Certainly. I've included my test code at the bottom of this post, and I'll attempt a brief explanation of it.

The input to my test code is a file of preprocessed sentences, one per line, with all extraneous characters removed. It reads all the lines into memory, breaking them into arrays of words:
It then builds a parallel array of hashes of uniq words in each sentence:
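A rough sketch of those two set-up steps (illustrative only; the names @sentences and @uniq are assumptions, not necessarily those used in the actual code):

    use strict;
    use warnings;

    # Read the preprocessed sentences (one per line) into arrays of words;
    # split ' ' also takes care of the trailing newline.
    open my $fh, '<', $ARGV[0] or die "Can't open $ARGV[0]: $!";
    my @sentences = map { [ split ' ' ] } <$fh>;
    close $fh;

    # Parallel array of hashes of the unique words in each sentence.
    my @uniq;
    for my $s ( @sentences ) {
        my %seen;
        $seen{ $_ } = 1 for @$s;
        push @uniq, \%seen;
    }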
This is done up-front, as each sentence and its associated hash of uniqs is re-used many times during the run of the code. The main body of the program consists of two nested loops over the indices of the sentences (and associated uniqs hashes), comparing each sentence against every other sentence in the usual way:
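Sketched roughly (the loop shape is implied by the description; the details are assumptions, reusing @sentences and @uniq from the sketch above):

    # All-pairs loop over sentence indices; $sa/$sb are the word-array refs
    # and $ua/$ub the unique-word hash refs for sentences A and B.
    for my $idxA ( 0 .. $#sentences - 1 ) {
        for my $idxB ( $idxA + 1 .. $#sentences ) {
            my( $sa, $ua ) = ( $sentences[ $idxA ], $uniq[ $idxA ] );
            my( $sb, $ub ) = ( $sentences[ $idxB ], $uniq[ $idxB ] );

            # Break each sentence into fragments of words shared with the other.
            my @aFragments = fragmentSentence( $sa, $ub );
            my @bFragments = fragmentSentence( $sb, $ua );

            # ... compare the fragment sets (see the LCS core sketched below) ...
        }
    }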
The variables $sa $sb & $ua $ub are references to the sentence arrays and uniq hashes for A & B respectively, and just simplify expressions in the subsequent processing. The variables @aFragments & @bFragments are produced by calling fragmentSentence() passing the sentence arrayref of one sentence and the uniq hashref for the other, for each pairing.
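An illustrative reduce-based version of such a fragmenting function (not the exact code from the post):

    use List::Util qw( reduce );

    # Split a sentence (arrayref of words) into runs of consecutive words
    # that all appear in the other sentence's unique-word hash; runs shorter
    # than two words can never contain a 2+ word common sequence, so drop them.
    sub fragmentSentence {
        my( $words, $otherUniq ) = @_;
        my $runs = reduce {
            if( exists $otherUniq->{ $b } ) { push @{ $a->[ -1 ] }, $b }
            elsif( @{ $a->[ -1 ] } )        { push @$a, [] }
            $a;
        } [ [] ], @$words;
        return grep { @$_ >= 2 } @$runs;
    }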
fragmentSentence() works by using reduce to build an array of arrays of contiguous words from sentence A that also appear in sentence B, and vice versa. The central core of the processing then runs a standard LCS algorithm on each of the resultant fragments of sentence A against each of the fragments from sentence B, taking the opportunity of an early exit at several points when it is obvious that no common sequence longer than the best already seen can be found:
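A sketch of what such a core might look like (again illustrative, building on the fragment sketches above; $best starts at 1 so only 2+ word runs are recorded):

    # Sort fragments longest-first so the early exits below fire as soon as
    # the remaining fragments are too short to beat the best run found so far.
    my @sortedA = sort { @$b <=> @$a } @aFragments;
    my @sortedB = sort { @$b <=> @$a } @bFragments;

    my( $best, @bestSeq ) = ( 1 );
    for my $fa ( @sortedA ) {
        last if @$fa <= $best;                  # nothing longer is possible
        for my $fb ( @sortedB ) {
            last if @$fb <= $best;              # likewise for the B fragments

            # Standard O(N*M) DP for the longest common run of words.
            my @L;
            for my $i ( 0 .. $#{ $fa } ) {
                for my $j ( 0 .. $#{ $fb } ) {
                    next unless $fa->[ $i ] eq $fb->[ $j ];
                    my $len = ( $i && $j ? ( $L[ $i - 1 ][ $j - 1 ] || 0 ) : 0 ) + 1;
                    $L[ $i ][ $j ] = $len;
                    if( $len > $best ) {
                        $best    = $len;
                        @bestSeq = @{ $fa }[ $i - $len + 1 .. $i ];
                    }
                }
            }
        }
    }
    print "@bestSeq ($best words)\n" if @bestSeq;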
Let me know if anything needs clarifying, and also whether it helps with your problem. The code: Read more... (3 kB)
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re: LCS efficiency problem
by Limbic~Region (Chancellor) on Jun 05, 2008 at 22:24 UTC

Are you looking for the longest common substring (word level)? You might be interested in Finding a LCS module on word level. Unfortunately, without knowing which task you are trying to accomplish, I can't give any more advice. Cheers - L~R
Re: LCS efficiency problem
by graff (Chancellor) on Jun 06, 2008 at 22:53 UTC

The version below includes its own small test set, but also accepts two or more file names as command-line args, and spits out the single (first) LCS string found for each pairing of input files. It will report an "LCS" of a single "word" whenever that happens to be the longest common string, and it will report "NO_MATCH" when two inputs have nothing at all in common.

Note that the "lcs_arrays" sub assumes that its inputs have already been arranged into arrays according to whatever tokenization strategy is appropriate to a given task. (This means you could use it for character-based comparisons as well as word-based, but there are already other modules available for doing that sort of work.) I think separating the tokenization from the LCS algorithm is a useful thing. In this example, the "main" part of my test script simply splits on whitespace, but you might want to do that part differently, e.g. removing punctuation characters (brackets, commas, periods, quotes, etc.) from the left and right edges of each word, folding case, and/or other stuff like that.

UPDATE: Why aren't you using the "LCS" function of Algorithm::Diff? It works on arrays, just like the toy function I've posted here, so it's just a matter of how you populate the arrays. Of course, having just played with that a bit, I see now that I might be confused about the "proper" definition of the term "longest common substring". Given two input lists, Algorithm::Diff::LCS will return "two three five six seven", whereas my toy function above will return just "two three five". I presume you know what you mean by "LCS", but you should be careful of what other people might mean by it (esp. if they, like me, might be confused about what the "proper" definition should be).
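To illustrate the distinction graff is pointing at, here is a small self-contained example; the input lists are made up for illustration and are not graff's originals:

    use strict;
    use warnings;
    use Algorithm::Diff qw( LCS );

    my @a = qw( one  two three five six  seven );
    my @b = qw( zero two three five four six seven );

    # Longest common *subsequence*: same order, gaps allowed.
    print join( ' ', LCS( \@a, \@b ) ), "\n";       # two three five six seven

    # Longest common *contiguous* run (the word-level "substring" sense).
    my( $best_len, $best_end ) = ( 0, 0 );
    my @len;
    for my $i ( 0 .. $#a ) {
        for my $j ( 0 .. $#b ) {
            next unless $a[$i] eq $b[$j];
            $len[$i][$j] = ( $i && $j ? ( $len[$i-1][$j-1] || 0 ) : 0 ) + 1;
            ( $best_len, $best_end ) = ( $len[$i][$j], $i )
                if $len[$i][$j] > $best_len;
        }
    }
    print join( ' ', @a[ $best_end - $best_len + 1 .. $best_end ] ), "\n";   # two three five

Algorithm::Diff's LCS is the longest common subsequence (order preserved, gaps allowed); the word-level "substring" sense requires the matched words to be contiguous in both inputs.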
by ikegami (Patriarch) on Jun 07, 2008 at 09:01 UTC
Or since you're only returning one of the longest,
Update: Tested. Added fixes.
by BrowserUk (Patriarch) on Jun 07, 2008 at 01:01 UTC

"I tried it on the sample data posted above by BrowserUK, with the single answer LCS word string returned in very little time (consistently less than 0.02 sec)"

Cool. Now think about doing all 3,206,778 pairings of the 2533 sentences: 3,206,778 * 0.02 = 64,135.56 seconds = 17 hrs 49 minutes. That's why I was somewhat impressed with my "few seconds under 10 minutes".

If you would like the 2533 sentences I extracted from Huckleberry Finn in order to perform a real comparison, /msg me an email id and I'll forward them to you.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by ikegami (Patriarch) on Jun 07, 2008 at 06:10 UTC
Re: LCS efficiency problem
by throop (Chaplain) on Jun 08, 2008 at 02:41 UTC

I'm not up to coding a better answer. But I suspect that the answers so far could be sped up further by using the 'jump tables' trick from the Boyer-Moore string search algorithm.
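For what it's worth, a sketch of what that trick might look like at the word level (my own illustration of a Horspool-style skip table, not code from this thread):

    # Horspool-style search for one word sequence (needle) inside another
    # (haystack); both are arrayrefs of words. Returns the 0-based match
    # position, or -1 if the needle does not occur.
    sub find_word_seq {
        my( $haystack, $needle ) = @_;
        my $n = @$needle;
        return -1 if $n == 0 || $n > @$haystack;

        # "Bad word" table: how far to shift when the haystack word aligned
        # with the last needle position is $word.
        my %skip;
        $skip{ $needle->[ $_ ] } = $n - 1 - $_ for 0 .. $n - 2;

        my $pos = 0;
        while( $pos + $n <= @$haystack ) {
            my $i = $n - 1;
            $i-- while $i >= 0 && $haystack->[ $pos + $i ] eq $needle->[ $i ];
            return $pos if $i < 0;                          # matched all $n words
            my $last = $haystack->[ $pos + $n - 1 ];
            $pos += exists $skip{ $last } ? $skip{ $last } : $n;
        }
        return -1;
    }

    print find_word_seq( [qw( P Q C D E R )], [qw( C D E )] ), "\n";   # 2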
by zhe (Initiate) on Jun 10, 2008 at 06:58 UTC

I have two files, each of them containing sentences. For example, file1 has the sentences:

    A C D E G F D
    C B F D E A
    A A C F D B A

and file2 has the sentences:

    A C D
    F D B
    F D D

The result should be, for each sentence in file1 matched against each sentence in file2, the longest common substring(s) between the two sentences and their word-level indices.

RESULT:

for sentence 1 in file 1
LCS: A C D 0,3   F D 5,6   F D 5,6
Original Sentence: A C D E G F D

sentence 2 in file 1
LCS: A 5,5   C 0,0   D 3,3   F D 3,4   F D 3,4
Original Sentence: C B F D E A

sentence 3 in file 1
LCS: A C 1,2   F D B 3,5   F D 3,4
Original Sentence: A A C F D B A

So is there any way to make this process more efficient?