in reply to Frequency of words in text file and hashes

The only punctuation marks you really have to worry about are - and '. If - occurs at the end of the line, the word closest to the end of the line has been split between that line and the next line, and has to be wrapped over. If ' occurs inside a word, it's part of the word and has to be preserved; otherwise, it is removed. All other punctuation marks are removed. A sloppy version of this is as follows:

EDIT: I wasn't accounting for extra spaces on the start, end or middle of each line. Updated code and added comments.

EDIT: Also updated so comments fit properly.

use strict;
use warnings;

my (%count, $last);
my $max = 0;
while (<DATA>) {
    s/^ +//; s/\s+$//; s/ +/ /g;            ## Remove extra spaces
    $_ = lc($_);                            ## Lowercase so not sensitive
    $_ = $last . $_;                        ## Append word piece
    if (m/-$/) {                            ## If line ends in -, remove last
        ($_, $last) = m/(.*?) ?(\w+)-/;     ## word piece for appending
    } else {
        $last = '';                         ## Else word piece is nothing
    }
    s/[^\w' ]//g;                           ## Remove extra punctuation
    s/(\w)'(\w)/$1-$2/g;                    ## Convert ' inside words to -
    s/'//g;                                 ## Remove all remaining '
    s/-/'/g;                                ## Convert - back to '
    for (split / +/) {                      ## Split on space and
        $count{$_}++;                       ## process words
        $max = length() if length() > $max; ## Find longest word size
    }
}
print sprintf('%'.$max.'s', $_) . " => $count{$_}\n"
    for sort { $count{$b} <=> $count{$a} || $a cmp $b } keys %count;

__DATA__
I can't be bought, and I won't be bought! My school-
house is my own, my precious. One school to rule them all!
This is my cant, my creed. Funky   space   check!

Re^2: Frequency of words in text file and hashes
by perl_seeker (Scribe) on Mar 28, 2005 at 11:36 UTC
    Hello Ted,
    thanks for the code and your ideas. I am actually working with text in a font for another language
    (not English), so the tokenisation and translation code does not work for me, but of course works
    great with your test data in English. But anyway, got the idea.

    In this font, a single letter (vowel/consonant) may be mapped to two or more ASCII characters, e.g.
        letter 1 in my font = ascii chars sd
        letter 2 in my font = ascii chars !#
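    If the multi-character encodings are known in advance, one way to split a word of such text back into font letters is a longest-first alternation. This is only a sketch with the two hypothetical encodings from the example above, not code from this thread:

```perl
use strict;
use warnings;

# Hypothetical font encodings: each entry is the ASCII sequence for one letter.
my @letters = ('sd', '!#');

# Build an alternation, longest sequence first, so ambiguous prefixes
# are resolved in favour of the longer encoding.
my $letter_re = join '|',
    map { quotemeta } sort { length $b <=> length $a } @letters;

my $word   = 'sd!#sd';
my @parsed = $word =~ /\G($letter_re)/g;   # walk the word letter by letter
print "@parsed\n";                          # the three font letters in order
```

    The `\G` anchor keeps each match starting where the previous one ended, so any stretch of the word that matches no known encoding stops the walk rather than being silently skipped.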
    Though of course we can get the frequency count of each word from %count, and also find the number of unique words from it, we are still building %count from an array into which the words have been pushed.
    Right now this works, but if the array held a huge number of words, say 1 million, would that not be a problem? Is there a way around this?
    Thanks,
    perl_seeker:)
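One way around the intermediate array raised in the question above is to skip it entirely and update the hash while reading, so memory grows with the number of *unique* words rather than the total word count. A minimal sketch (the data here is made up, and the simple whitespace split ignores the punctuation handling from the main code):

```perl
use strict;
use warnings;

# Count words directly from the input stream: no array of words is ever
# built, so a million-word file is fine as long as the vocabulary is modest.
my %count;
while (my $line = <DATA>) {
    $count{$_}++ for split ' ', lc $line;   # split ' ' skips leading/extra spaces
}
print "$_ => $count{$_}\n"
    for sort { $count{$b} <=> $count{$a} || $a cmp $b } keys %count;

__DATA__
one fish two fish
red fish blue fish
```

The same idea works for a real file by replacing <DATA> with a filehandle opened on the file; the memory cost is one hash entry per distinct word.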