The lonely $ is actually a legitimate token, and it's from near the end of the test string: ...&)(*$ it will...
The empty tokens (they're not really spaces, but actually empty) are an artifact of the now-let's-delete-the-commas series of regexes. A lone , or . surrounded by non-constituent chars initially gets treated as a token, but is then deleted (leaving the temporary token separator behind). I suppose I could add another regex, s/%+/%/g, or just grep {!/^$/} keys %words... or something.
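Either cleanup is a one-liner; a rough sketch of both (the variable names here are just stand-ins, not what the real code uses):

    # collapse runs of the temporary separator before splitting...
    $text =~ s/%+/%/g;

    # ...or just weed out the empty entries afterwards
    my @tokens = grep { !/^$/ } keys %words;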
Here's the string shortly before being split on the temporary token boundaries; this is how I discovered where the anomalies were coming from.
This%is%an%example%Keep%$2.50%1,500%and%192.168.1.1%I%want%to%work%this%thing%out%a%LITTEL!!!!L%BITH!!!!!%MORE%with%some%%unhapp%yword%combinations%and%%a%little%%bit%of%%confusing%text%hopefully%$57%$%$%it%will%work
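Splitting that on the separator makes it obvious where the oddballs come from; a quick illustration (assuming the marked-up string is in $marked, which isn't the real variable name):

    my @tokens = split /%/, $marked;
    # ...%some%%unhapp%...  yields  'some', '', 'unhapp'   (empty token from the doubled %)
    # ...%$57%$%$%it%...    yields  '$57', '$', '$', 'it'  (the legitimate $ tokens)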
As for the TV example, it'd be tokenized just as it looks: Do|33|of|people|watch|T|V. That may seem slightly wrong, but I'm not trying to be perfect here. The digit-surrounded comma and period have already given me enough trouble. Gotta draw the line somewhere. :)
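If it helps, here's a deliberately simplified stand-in for the regex series that shows why T.V. falls apart like that (it ignores the digit-surrounded , and . handling, so $2.50 and 1,500 would break under it too):

    # treat every run of non-alphanumerics as a separator
    my @tokens = grep { length } split /[^A-Za-z0-9]+/, 'Do 33% of people watch T.V.?';
    print join '|', @tokens;   # Do|33|of|people|watch|T|V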