in reply to Parsing Large files

Well, here's my solution:

use strict;
use warnings;
use 5.010;

# Load the dictionary: file each lowercase word under its "signature"
# (its letters sorted alphabetically), so that anagrams share a key.
open my $dfd, '<', '/usr/share/dict/british-english' or die $!;
my %dict;
while (<$dfd>) {
    next unless /^[a-z]+$/;
    push @{ $dict{ hash($_) } }, $_;
}
close $dfd;

# Read words from the user; an empty line exits. For each word, print
# every dictionary word that has the same signature.
while (<>) {
    exit if /^$/;
    say join '', @{ $dict{ hash($_) } } if $dict{ hash($_) };
}

# The signature: the word's characters, sorted and rejoined. (Lines keep
# their trailing newline on both sides, so the keys still match up.)
sub hash { join '', sort split //, shift; }
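To try it, assuming the script is saved as anagrams.pl and the dictionary file exists at that path (adjust it for your system), feed it a word and it prints every anagram of that word it finds; run interactively, an empty line quits. Something like:

$ echo dog | perl anagrams.pl
dog
god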

Re^2: Parsing Large files
by hodge-podge (Novice) on Apr 30, 2010 at 19:09 UTC
    Wow, well thanks a lot, your solution is a lot more elegant than mine would have been. Surprisingly fast, too. I follow it for the most part, but I get tripped up in a couple of places. If I am not mistaken, you open the dictionary for reading and load all the words into a hash, then read the user's input... but what exactly are you doing here?  say join '', @{$dict{hash($_)}} if $dict{hash($_)}; Thanks for the help.

      If you don't understand a data structure, you should use Data::Dumper.
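      For instance, dropping something like this in right after the dictionary-loading loop (the variable name matches the code above) prints the whole structure, so you can see exactly how the words are grouped:

      use Data::Dumper;
      print Dumper(\%dict);   # every signature => [ list of anagrams ] pair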

      If the dictionary contained

      act
      cat
      dog
      god
      zebra

      it builds the following hash:

      %dict = (
          act   => [ 'act', 'cat' ],
          dgo   => [ 'dog', 'god' ],
          aberz => [ 'zebra' ],
      );

      If the user enters dog, it searches for dgo (hash("dog")) and lists what it finds (dog and god).
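      And if you want to convince yourself of the key scheme, here is a tiny standalone check (the hash sub is copied verbatim from the solution above):

      sub hash { join '', sort split //, shift; }
      print hash('dog'), "\n";   # prints "dgo", the same key hash('god') yields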