Aha - sorry, I was too quick to jump to conclusions.
Anyway, good point. I'm using split(/ /) to split on spaces. Here is the bare skeleton of my code, which does nothing but count misspellings:
my @files = glob "*";
foreach my $file (@files) {
    open(my $input, '<', $file) or die "Can't open '$file': $!";
    while (<$input>) {
        chomp;                           # strip the newline so the last word on each line gets looked up correctly
        my @words = split / /;           # one sentence per line, split on single spaces
        foreach my $word (@words) {
            $errors++ unless $d{$word};  # count anything not in the dictionary hash %d
        }
    }
    close($input);
}
Are there faster ways to split strings? My files (as I have formatted them) have one sentence per line.
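For what it's worth, here's a sketch of the two alternatives I'm thinking of trying first - untested on my actual data, so this is a guess rather than a measurement. Perl special-cases split ' ' (a literal space string, not a regex) to split on runs of whitespace and skip leading whitespace, and matching \S+ directly avoids building a temporary array at all. Either one would be a drop-in replacement for the split/foreach in the inner loop above (one at a time, of course):

# Alternative 1: the special-cased whitespace split
foreach my $word (split ' ', $_) {
    $errors++ unless $d{$word};
}

# Alternative 2: pull the words out with a match instead of split,
# so no intermediate list is built at all
while (/(\S+)/g) {
    $errors++ unless $d{$1};
}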
I've also been playing around with this some more, trying to reproduce the sort of results you just showed. Going from doing only the hash lookup in my innermost loop to doing nothing at all but look at the word shaved off only 20% of my time, so 80% is spent just getting there. That 20% still adds up to 16 hours or so over the whole corpus, but it does suggest that the bigger savings lie elsewhere. Using /usr/bin/time I've found that the process's "user time" is only marginally less than the "elapsed (wall clock) time", which, if I understand correctly, means that very little time was spent waiting on I/O or for the CPU. So it must all be in this program, and the majority appears to be in the splitting.
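To pin that down further, I'm planning a quick micro-benchmark of the candidates in isolation using the core Benchmark module - something like the sketch below, where the sample sentence and the tiny %d are obviously stand-ins for my real data:

use Benchmark qw(cmpthese);

my $line = "this is a stand in for a typical sentence from my corpus";
my %d    = map { $_ => 1 } split ' ', $line;

# A negative count means "run each snippet for at least 3 CPU seconds".
cmpthese(-3, {
    split_regex => sub { my @w = split / /, $line },
    split_space => sub { my @w = split ' ', $line },
    match_words => sub { my $n = 0; $n++ while $line =~ /\S+/g },
    lookup_only => sub { my $hit = $d{'sentence'} },
});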
If only I'd started my dissertation two months ago - then I wouldn't have to worry about this at all and could just let my various pre-processing tasks run for a week and get on with it. Anyway, thanks for your help - I welcome any thoughts or suggestions, though that may lead somewhat off the topic of hashes. It's certainly the case that the hash lookups are bloody fast! If my math is correct, my mediocre machine can spell check about 180 billion words against a 100,000-word dictionary in 16 hours of CPU time (roughly three million lookups per second). Yikes.