Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hello perlmonks

I was wondering if there is any Perl extension for determining the edit transcript between two strings based on words. I know Text::EditTranscript, but it is based on characters.

Thank you in advance for your time

Replies are listed 'Best First'.
Re: Levenstein distance transcription
by Eily (Monsignor) on Dec 05, 2014 at 13:39 UTC

    It doesn't look like there is. If you have few unique words (by taking smaller portions of a bigger string if necessary) you could always replace each word by a char and run the character version of the algorithm on it:

    use v5.14;
    use Data::Dump qw/pp/;

    my @chars = ('0'..'9', 'a'..'z', 'A'..'Z');
    # Up to scalar(@chars) different words (actually @chars+1 because of
    # undef, but that wouldn't help readability)
    $_ = <<STR;
    Jack and Jill went up the hill to fetch a pail of water
    Jack fell down and broke his crown and Jill came tumbling after
    STR
    my @words = /\w+/g;
    my %replace;
    my $asChars = join '', map { $replace{$_} //= shift @chars } @words; # 'defined-orcish manoeuver' :D
    # say pp \%replace;
    say $asChars;
    my %reverse = reverse %replace;
    say join ' ', map $reverse{$_}, split //, $asChars;
    __DATA__
    0123456789abc0de1fgh12ijk
    Jack and Jill went up the hill to fetch a pail of water Jack fell down and broke his crown and Jill came tumbling after

    Edit: for the comparison to work, you have to use the same %replace hash for all strings. The $h{$_}//=NewVal() idiom (Orcish Maneuver) means that any word that is already known is replaced by its existing substitute, while an unknown word adds a new entry to the hash. Here I use // instead of || because otherwise '0' would be treated as an invalid (false) character.
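    To make the shared-%replace point concrete, here is a minimal self-contained sketch (the helper name as_chars is my own, not from any module): both strings go through one mapping, so the same word always becomes the same character in either string, and a character-level comparison of the results is effectively word-level.

    ```perl
    use strict;
    use warnings;
    use feature 'say';

    # One word-to-character mapping shared by all strings to compare.
    my @chars = ('0'..'9', 'a'..'z', 'A'..'Z');
    my %replace;

    # Map a string's words to single characters via the shared hash.
    sub as_chars {
        my ($str) = @_;
        return join '', map { $replace{$_} //= shift @chars } $str =~ /\w+/g;
    }

    my $s1 = as_chars('the quick brown fox');
    my $s2 = as_chars('before the quick brown fox');

    say $s1;   # 0123
    say $s2;   # 40123 ('before' was unknown, so it got the next free char)
    ```

    Any character-based tool can now be run on $s1 and $s2, and the result read back as word operations.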

Re: Levenstein distance transcription
by BrowserUk (Patriarch) on Dec 05, 2014 at 15:58 UTC

    I've got to wonder why you want this?

    I normally try to resist asking this question, because I personally dislike having to explain my reasoning to others... but.

    There are so many different possibilities for each pair of strings, to want any given one seems nonsensical.

    For example: Given the length (in words) of any two strings, you can generate an "edit transcript" thus:

    my $shorter = min( $lenString1, $lenString2 );
    my $longer  = max( $lenString1, $lenString2 );

    ## longer to shorter
    my $et = 'DI' x $shorter . 'D' x ( $longer - $shorter );

    ## or shorter to longer
    my $et = 'DI' x $shorter . 'I' x ( $longer - $shorter );

    Or:

    ## longer to shorter
    my $et = 'D' x $longer . 'I' x $shorter;

    ## shorter to longer
    my $et = 'D' x $shorter . 'I' x $longer;

    As best I can tell, the only actually useful information you might glean from this (done the full, laborious Levenshtein way) is where the two sentences happen to contain the same word in the same position.

    But if you had two identical sentences except that one of them had an extra word at the start, eg:

    'the quick brown fox' 'before the quick brown fox'

    The Levenshtein algorithm would fail to detect any 'same word in same place' and produce an "edit transcript" of (something like) 'DIDIDIDII'; so I'm failing to see a good use.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      The Levenshtein algorithm finds the shortest edit script. In your example this would be:

      use Algorithm::Diff qw(sdiff);
      use Data::Dumper;

      my $a = sdiff(
          [split(/\W+/, "the quick brown fox")],
          [split(/\W+/, "before the quick brown fox")],
      );
      print Dumper($a);

      This prints:

      $VAR1 = [
                [ '+', '', 'before' ],
                [ 'u', 'the', 'the' ],
                [ 'u', 'quick', 'quick' ],
                [ 'u', 'brown', 'brown' ],
                [ 'u', 'fox', 'fox' ]
              ];

      Algorithm::Diff (aka A::D) uses the Hunt-Szymanski algorithm, an improvement on the basic Levenshtein algorithm. The nice thing about A::D is that it takes arrays as input, so it can be used for anything that can be represented as an array of strings. The result can be the edit distance, the edit script, the length of the longest common subsequence (LLCS), the longest common subsequence (LCS), or a (global) alignment.

      A::D is reasonably fast, A::DXS is a lot faster, and my private versions of them are roughly twice as fast again. Of course String::Similarity (purely string based, determining only the LLCS, with Myers' algorithm implemented in C) is about ten times faster.
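      If what the OP wants is a compact transcript string rather than Dumper output, sdiff-style rows are easy to post-process. A sketch using the rows from the example above (the letters U/I/D/S for unchanged/insert/delete/substitute are my own convention, not part of Algorithm::Diff; the rows are hardcoded here so the sketch stands alone):

      ```perl
      use strict;
      use warnings;

      # sdiff-style rows: 'u' = unchanged, '+' = insert,
      # '-' = delete, 'c' = changed.
      my @rows = (
          [ '+', '',      'before' ],
          [ 'u', 'the',   'the'    ],
          [ 'u', 'quick', 'quick'  ],
          [ 'u', 'brown', 'brown'  ],
          [ 'u', 'fox',   'fox'    ],
      );

      # Map each row's operation code to a single transcript letter.
      my %op = ( 'u' => 'U', '+' => 'I', '-' => 'D', 'c' => 'S' );
      my $transcript = join '', map { $op{ $_->[0] } } @rows;

      print "$transcript\n";   # IUUUU
      ```

      In real use you would feed the return value of sdiff() straight into the map.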

Re: Levenstein distance transcription
by wollmers (Scribe) on Dec 05, 2014 at 21:13 UTC

    Algorithm::Diff does it perfectly, and is flexible in the format of the output. The input is also flexible, because the elements of the arrays can be characters, graphemes, words, sentences, lines, paragraphs, pages, ...

    See my example in the answer to BrowserUk.

Re: Levenstein distance transcription
by blindluke (Hermit) on Dec 05, 2014 at 13:40 UTC

    It's a fairly popular algorithm - you can find Perl implementations both on Rosetta Code and on Wikibooks. For your convenience, I'm posting the copied Wikibooks version below:

    use List::Util qw(min);

    sub levenshtein {
        my ($str1, $str2) = @_;
        my @ar1 = split //, $str1;
        my @ar2 = split //, $str2;

        my @dist;
        $dist[$_][0] = $_ foreach (0 .. @ar1);
        $dist[0][$_] = $_ foreach (0 .. @ar2);

        foreach my $i (1 .. @ar1) {
            foreach my $j (1 .. @ar2) {
                my $cost = $ar1[$i - 1] eq $ar2[$j - 1] ? 0 : 1;
                $dist[$i][$j] = min(
                    $dist[$i - 1][$j] + 1,
                    $dist[$i][$j - 1] + 1,
                    $dist[$i - 1][$j - 1] + $cost,
                );
            }
        }
        return $dist[@ar1][@ar2];
    }

    Update: I'm sorry, I missed the "based on words" part of your question. Thankfully, Eily already answered your question, reading it more carefully than I did.

    - Luke

      This one is fairly easy to turn into a word-based equivalent: just replace split // with split /\W+/. But it only returns a distance, not a description of the differences the way Text::EditTranscript does.
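      To get the transcript as well, the same DP table can be kept and walked back from the bottom-right corner. A self-contained word-based sketch (the function name and the M/S/I/D letters for match/substitute/insert/delete are my own choice, not an existing CPAN API):

      ```perl
      use strict;
      use warnings;
      use List::Util qw(min);

      # Word-based Wagner-Fischer with a backtrace: returns the edit
      # distance and a transcript string.
      sub word_transcript {
          my ($str1, $str2) = @_;
          my @a = split /\W+/, $str1;
          my @b = split /\W+/, $str2;

          my @dist;
          $dist[$_][0] = $_ for 0 .. @a;
          $dist[0][$_] = $_ for 0 .. @b;

          for my $i (1 .. @a) {
              for my $j (1 .. @b) {
                  my $cost = $a[$i - 1] eq $b[$j - 1] ? 0 : 1;
                  $dist[$i][$j] = min(
                      $dist[$i - 1][$j] + 1,
                      $dist[$i][$j - 1] + 1,
                      $dist[$i - 1][$j - 1] + $cost,
                  );
              }
          }

          # Walk back through the table to recover the operations.
          my ($i, $j, @ops) = (scalar @a, scalar @b);
          while ($i > 0 || $j > 0) {
              if ($i > 0 && $j > 0
                  && $dist[$i][$j] == $dist[$i - 1][$j - 1]
                                      + ($a[$i - 1] eq $b[$j - 1] ? 0 : 1)) {
                  unshift @ops, $a[$i - 1] eq $b[$j - 1] ? 'M' : 'S';
                  $i--; $j--;
              }
              elsif ($j > 0 && $dist[$i][$j] == $dist[$i][$j - 1] + 1) {
                  unshift @ops, 'I';   # word inserted from the second string
                  $j--;
              }
              else {
                  unshift @ops, 'D';   # word deleted from the first string
                  $i--;
              }
          }
          return ($dist[@a][@b], join '', @ops);
      }

      my ($d, $t) = word_transcript('the quick brown fox',
                                    'before the quick brown fox');
      print "$d $t\n";   # 1 IMMMM
      ```

      Note how the transcript immediately shows the four 'same word in same place' matches that BrowserUk's example was about.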

      You shouldn't trust Rosetta Code, Wikipedia, Wikibooks, or even pseudocode in algorithmic papers (peer reviewed or not), unless you have tested the code to some reasonable level of completeness.

      In my collection of implementations (C and Perl) and algorithm descriptions in the field around LCS/alignment/ASM, only about 10% are usable in the sense of being reliable.