Hi Monks,
I have a large plain text file which currently contains about one lakh (100,000) words, and may contain up to one million or so words in the future.

I also have a simple script which splits the text on whitespace into an array, removes punctuation symbols,
and counts the total number of words. It then counts the number of unique words after removing duplicates.
I also need to list the words top down, in terms of their frequency of occurrence in the original text (before removing duplicates):

most frequently occurring word - number of times it occurred
second most frequent word - number of times it occurred
...and so on.
This bit of code, which I got from the Perl Cookbook, uses a hash to count the number of times each word occurs:

%count = ();
foreach $element (@words) {
    $count{$element}++;
}
while ( ($k,$v) = each %count ) {
    print "$k => $v\n";
}
Printing the hash gives me the words and their frequency counts, but how do we sort this list to have the most
frequently occurring one at the top, then the second most frequent, and so on? e.g.

the 150
it 85
we 60
are 40
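I am guessing that sorting the keys by their values might do it, perhaps along these lines (untested sketch, assuming %count is filled in as above):

# Untested: print words in descending order of frequency
foreach my $word ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    print "$word $count{$word}\n";
}

but I am not sure whether this is the idiomatic way, especially for a very large hash.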
Also, since I need to handle a very large file, is it possible to do the whole exercise using a hash: split the text
into a hash, count the total number of words, remove duplicates, count the number of unique words, and also do the frequency count?

Here is the code I currently have, which reads the words into an array:
sub lexicon_generate {
    open CP, 'tcorpus.txt' or die $!;    # Open file.
    my @words;
    while (<CP>) {
        chomp;
        push @words, split;
    }
    close CP;
    #print "\n@words\n";
    $lwords = @words;
    #print "\n$lwords";
    for ($i = 0; $i < $lwords; $i++) {
        #print "\nThis is the next token:";
        #print "\n$words[$i]";
    }

    # Remove punctuation marks.
    foreach my $item (@words) {
        $item =~ tr/*//d;
        $item =~ tr/(//d;
        $item =~ tr/)//d;
        $item =~ tr/""//d;
        $item =~ tr/''//d;
        $item =~ tr/?//d;
        $item =~ tr/,//d;
        $item =~ tr/. //d;
        $item =~ tr/-//d;
        $item =~ tr/"//d;
        $item =~ tr/'//d;
        $item =~ tr/!//d;
        $item =~ tr/;//d;
        $item = '' unless defined $item;
        #print "\nThe token after removing punctuation marks:";
        #print "\n$item\n";
    }

    # Number of words in @words before removing duplicates.
    $lnwords = @words;
    #print "\n$lnwords";
    foreach my $final_thing (@words) {
        #print "$final_thing\n";
    }

    # Remove duplicate strings.
    my %seen = ();
    my @uniq = ();
    foreach my $u_thing (@words) {
        unless ($seen{$u_thing}) {
            # If we get here, we have not seen it before.
            $seen{$u_thing} = 1;
            push(@uniq, $u_thing);
        }
    }
    #print "\nThe unique list:";
    #print "\n@uniq";

    # Number of words in @words after removing duplicates.
    $luniq = @uniq;
    #print "\n$luniq";

    open LEX, '>tcorpus_unique.txt' or die $!;
    foreach my $u_elt (@uniq) {
        #print "\n$u_elt";
        print LEX "\n$u_elt";
    }
    close LEX;
}
&lexicon_generate();
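As an aside, I suspect all those tr lines could be collapsed into a single pass, maybe something like this (untested, with the hyphen placed last so it is taken literally):

foreach my $item (@words) {
    # Untested: delete all of the listed punctuation characters in one pass
    $item =~ tr/*()"'?,.!;-//d;
}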
Any sample code using a hash would be most appreciated.
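To make clearer what I am after, here is a rough, untested sketch of the kind of thing I am imagining, reading the same tcorpus.txt and using a single hash for everything:

# Untested sketch: total count, unique count, and frequency list from one hash
use strict;
use warnings;

my %count;
my $total = 0;

open my $fh, '<', 'tcorpus.txt' or die $!;
while (my $line = <$fh>) {
    chomp $line;
    foreach my $word (split ' ', $line) {
        $word =~ tr/*()"'?,.!;-//d;   # strip punctuation, as in my script above
        next if $word eq '';
        $count{$word}++;              # per-word frequency
        $total++;                     # running total of all words
    }
}
close $fh;

my $unique = keys %count;             # number of distinct words

print "Total words:  $total\n";
print "Unique words: $unique\n";

# Most frequent word first
foreach my $word ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    print "$word $count{$word}\n";
}

Is something like that a sensible approach for a million-word file?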
Thanx,
perl_seeker:)

2005-03-18 Janitored by Arunbear - added readmore tags, as per Monastery guidelines

