in reply to tokenize plain text messages

If you don't find the Regexp::Common::Revdiablo :) module, this might give you a starting place. It's not tested much beyond what you see, and I think it could be simplified.

$s = 'This, is, an, example. Keep $2.50, 1,500, and 192.168.1.1.';
$re_revdiablo = qr[(?:[^\w\'\$!,.-]+|(?:(?<=\D)[.,])|(?:[.,](?=\D|$)))+];
print join ' | ', split $re_revdiablo, $s;

This | is | an | example | Keep | $2.50 | 1,500 | and | 192.168.1.1

I tried to use the /x modifier to break up the density of the regex, but that doesn't seem to work with split?

Update: I'm talking crap. /x does work with split provided you don't put spaces between the \ and the character it is escaping. D'oh!

$re_revdiablo = qr[
    (?:                         # group, no capture
        [^\w\'\$!,.-]           # on anything not in your list
        |
        (?: (?<= \D ) [.,] )    # or . or , if preceded by a non-numeric
        |
        (?: [.,] (?= \D | $ )   # or . or , if followed by a non-numeric or EOL
        )
    )+                          # 1 or more
]x;

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

Replies are listed 'Best First'.
Re: Re: tokenize plain text messages
by revdiablo (Prior) on May 10, 2003 at 01:55 UTC

    That's very nice... glad to see a way I can Do It All (tm) with one regex. The only problem is that it benchmarks about 53% slower than my _w_strings sub. (Also, each one gives very slightly different results with a more complicated test message. Ugh.)

    Here's how I used your regex in a subroutine:

    sub tokenize_msg_w_oneregex {
        my ($msg) = @_;
        my $re = qr{(?:[^\w\'\$!,.-]+|(?:(?<=\D)[.,])|(?:[.,](?=\D|$)))+};
        my %words = map { $_ => 1 } split $re, $msg;
        return keys %words;
    }

    And here's the cmpthese output:

    Benchmark: timing 10000 iterations of Lists, One Regex, Strings...
         Lists:  4 wallclock secs ( 4.15 usr +  0.00 sys =  4.15 CPU) @ 2409.64/s (n=10000)
     One Regex:  4 wallclock secs ( 3.56 usr +  0.00 sys =  3.56 CPU) @ 2808.99/s (n=10000)
       Strings:  2 wallclock secs ( 2.33 usr +  0.00 sys =  2.33 CPU) @ 4291.85/s (n=10000)

                  Rate     Lists One Regex   Strings
    Lists       2410/s        --      -14%      -44%
    One Regex   2809/s       17%        --      -35%
    Strings     4292/s       78%       53%        --
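    For context, cmpthese output like the above comes from the standard Benchmark module. A minimal driver might look like the following (the iteration count and test message are stand-ins, not the actual harness used for the numbers above):

    ```perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $msg = 'This, is, an, example. Keep $2.50, 1,500, and 192.168.1.1.';
    my $re  = qr{(?:[^\w\'\$!,.-]+|(?:(?<=\D)[.,])|(?:[.,](?=\D|$)))+};

    sub tokenize_msg_w_oneregex {
        my ($msg) = @_;
        my %words = map { $_ => 1 } split $re, $msg;
        return keys %words;
    }

    # Each key is a label, each value a sub to time; add one entry per
    # candidate tokenizer. 10000 iterations matches the run shown above.
    cmpthese(10000, {
        'One Regex' => sub { tokenize_msg_w_oneregex($msg) },
    });
    ```

    cmpthese prints the per-sub timings and the percentage comparison table to STDOUT.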

      Try this version of your ....oneregex sub. (Phew! Long names:).

      D:\Perl\test>257026
      192.168.1.1 | $2.50 | Keep | example | and | is | This | 1,500 | an
                 Rate    lists  strings    regex
      lists     757/s       --     -56%     -61%
      strings  1736/s     129%       --     -10%
      regex    1930/s     155%      11%       --

      As the regex never changes, there is no need to recompile it every time you call the sub, so I made it a constant. I would usually put the use constant... inside the sub where it is used, to keep things tidy, but I have been beaten up over that before because it implies the constant is lexically scoped, which it isn't. That doesn't fool me, but it is your choice.

      use constant RE_WORDS =>
          qr[(?:[^\w\'\$!,.-]|(?:(?<=\D)[.,])|(?:[.,](?=\D|$)))+];

      sub tokenize_msg_w_oneregex {
          my %words;
          @words{ split RE_WORDS, shift } = ();
          return keys %words;
      }

      To be fair, a large part of the savings is avoiding the map and initialising every hash element to 1 when you will never use the value. Doing the split inside a hash slice avoids this. You can easily feed this saving back into your other subs which would probably make your strings sub quickest again. But I thought I'd leave that AAEFTR:)
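      The map-versus-slice saving can be seen in isolation. Both idioms produce the same key set; the slice just never builds the list of throwaway 1 values. A small sketch (the token list here is made up for illustration):

      ```perl
      use strict;
      use warnings;

      my @tokens = qw(this is is an an example);

      # map builds a full key => value pair list before the hash assignment
      my %via_map = map { $_ => 1 } @tokens;

      # assigning an empty list to a hash slice creates the keys with
      # undef values, skipping the 1s entirely
      my %via_slice;
      @via_slice{ @tokens } = ();

      print join(' ', sort keys %via_map),   "\n";   # an example is this
      print join(' ', sort keys %via_slice), "\n";   # an example is this
      ```

      The difference only matters when you never look at the values, which is exactly the case when a hash is being used as a set.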

        Your suspicions are confirmed. The optimization has sped my strings-based tokenizer up by 39%. Interestingly, with the expanded test string I'm using, my original slightly edged out the optimized version of your single-regex. Here are the results:

                    Rate    lists oneregex  str_org  str_opt
        lists     2347/s       --     -38%     -45%     -60%
        oneregex  3807/s      62%       --     -10%     -35%
        str_org   4237/s      81%      11%       --     -28%
        str_opt   5882/s     151%      55%      39%       --

        This optimization is definitely effective. Thanks very much.

        Oh. And here's the expanded test string if you want to play with it:

        my $msg = q{This, is, an, example. Keep $2.50, 1,500, and 192.168.1.1.
        I want to work this thing out a LITTEL!!!!L BITH!!!!! MORE@@@@@@
        with some,.unhapp.yword,combinations.and , a little .. bit of,,
        confusing,text hopefully @#@#@#@%#$57)#$*(#&)(*$ it will @#@][]
        work.};