in reply to Re: Re: Re: tokenize plain text messages
in thread tokenize plain text messages

Your suspicions are confirmed. The optimization sped up my strings-based tokenizer by 39%. Interestingly, with the expanded test string I'm using, my original version slightly edged out your optimized single-regex. Here are the results:

             Rate    lists oneregex  str_org  str_opt
lists      2347/s       --     -38%     -45%     -60%
oneregex   3807/s      62%       --     -10%     -35%
str_org    4237/s      81%      11%       --     -28%
str_opt    5882/s     151%      55%      39%       --
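
(If you want to reproduce a table like this, it's the output of Benchmark::cmpthese. A minimal sketch; the tokenize_* sub names below are placeholders for the four implementations, which aren't reproduced here, and $msg is the test string posted below:)

use Benchmark qw(cmpthese);

# Run each tokenizer for at least 3 CPU seconds and print a comparison chart.
cmpthese( -3, {
    lists    => sub { tokenize_lists( $msg )    },
    oneregex => sub { tokenize_oneregex( $msg ) },
    str_org  => sub { tokenize_str_org( $msg )  },
    str_opt  => sub { tokenize_str_opt( $msg )  },
} );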

This optimization is definitely effective. Thanks very much.

Oh. And here's the expanded test string if you want to play with it:

my $msg = q{This, is, an, example. Keep $2.50, 1,500, and 192.168.1.1. I want to work this thing out a LITTEL!!!!L BITH!!!!! MORE@@@@@@ with some,.unhapp.yword,combinations.and , a little .. bit of,, confusing,text hopefully @#@#@#@%#$57)#$*(#&)(*$ it will @#@][] work.};

Re: Re: Re: Re: Re: tokenize plain text messages
by BrowserUk (Patriarch) on May 10, 2003 at 06:03 UTC

    A couple of things I noticed once I played with the expanded test case. Your strings version is capturing single spaces somewhere, and both your strings and my regex are letting lone $'s through as well.

    I have an m/(..)/g version that avoids both problems and is quicker than my split attempt, though still not as quick as your strings version in its current form.

    our $RE_WORDS = qr[ (?: \$? \d+ (?:[.,] \d+ )*)+ | [\w\'!-]+ ]xo;

    sub tokenize_msg_w_m {
        my %words;
        @words{ shift =~ m[($RE_WORDS)]og } = ();
        return keys %words;
    }
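
    Called on the expanded test string, it looks like this (a sketch; $msg is the string posted above, and the order of the returned tokens is arbitrary since they come back as hash keys):

    my @tokens = tokenize_msg_w_m( $msg );
    print join( ' | ', @tokens ), "\n";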

    Results

    D:\Perl\test>257026

    Regex:   $ | want | 192.168.1.1 | $57 | confusing | yword | hopefully | combinations | LITTEL!!!!L | a | of | bit | is | This | to | will | text | this | 1,500 | Keep | out | BITH!!!!! | it | example | work | unhapp | $2.50 | little | MORE | I | some | thing | with | and | an
    Strings: $ | | want | 192.168.1.1 | $57 | confusing | yword | hopefully | combinations | LITTEL!!!!L | a | of | bit | is | This | to | will | text | this | 1,500 | Keep | out | BITH!!!!! | it | example | work | unhapp | $2.50 | little | MORE | I | some | thing | with | and | an
    Match:   want | 192.168.1.1 | $57 | confusing | yword | hopefully | combinations | LITTEL!!!!L | a | of | bit | is | This | to | will | text | this | 1,500 | Keep | out | BITH!!!!! | it | example | work | unhapp | $2.50 | little | MORE | I | some | thing | with | and | an

                Rate   regex   match strings
    regex      526/s      --    -13%    -26%
    match      607/s     15%      --    -15%
    strings    710/s     35%     17%      --

    The other question that crossed my mind was: what happens if the text contains "Do 33% of people watch T.V.?"


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

      The lonely $ is actually a legitimate token, and it's from near the end of the test string: ...&)(*$ it will...

      The empty tokens (they're not really spaces, they're actually empty) are an artifact of the now-let's-delete-the-commas series of regexes. They come from a lone , or . surrounded by non-constituent chars, which initially gets counted as a token but is then deleted (leaving the temporary token separator behind). I suppose I could add another regex, s/%+/%/g, or just grep {!/^$/} keys %words... or something. (A sketch of both options follows the string below.)

      Here's the string shortly before being split on the temporary token boundaries; this is how I discovered where the anomalies were coming from.

      This%is%an%example%Keep%$2.50%1,500%and%192.168.1.1%I%want%to%work%this%thing%out%a%LITTEL!!!!L%BITH!!!!!%MORE%with%some%%unhapp%yword%combinations%and%%a%little%%bit%of%%confusing%text%hopefully%$57%$%$%it%will%work
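
      Either fix might look something like this (a sketch; $joined stands for the '%'-separated string above, and %words for the tokenizer's result hash):

      $joined =~ s/%+/%/g;                      # collapse runs of the separator before splitting...
      my @tokens = grep { !/^$/ } keys %words;  # ...or just drop the empty tokens after the fact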

      As for the TV example, that'd be tokenized just as it seems: Do|33|of|people|watch|T|V. This may seem slightly wrong, but I'm not trying to be perfect here. The digit-surrounded comma and period have already given me enough trouble. Gotta draw the line somewhere. :)
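
      For the curious, that's easy to check against the m//g version above, which behaves the same way on this input (a sketch; order is arbitrary, since the tokens come back as hash keys):

      print join( ' | ', tokenize_msg_w_m( 'Do 33% of people watch T.V.?' ) ), "\n";
      # prints something like: Do | 33 | of | people | watch | T | V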