BrowserUk appears to have this covered on technical grounds, but I also wanted to note: I'm not trying to re-invent the spell checker. If counting typos were all I wanted to do, your idea would certainly get the job done. But, as my post says, I want to spell-check individual words in the middle of a lot of other processing.
If you must know, I have a million books digitally scanned by the Open Library, and a lot of these books are really old, and the character recognition isn't good enough to make the result even slightly useful. I'm running a boatload of stuff on this big corpus (one-time preprocessing as well as, later, as-needed lookups), so I wanted to chuck out entirely those books that are so garbled as to be meaningless, to save time and space. So, while I process them, I spell-check them, and if the number of spelling errors per line exceeds a certain threshold after a certain number of lines, I discard the book.
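In rough Perl, the filter amounts to something like this (a simplified, untested sketch: the dictionary hash, the tokenizing regex, and both thresholds are just stand-ins for the real ones):

```perl
use strict;
use warnings;

# Simplified sketch of the per-book filter.  %$dict is the big word-list
# hash from the original question; the tokenizer and the thresholds
# (200 lines, 3 errors/line) are placeholders.
sub book_is_hopeless {
    my ( $fh, $dict ) = @_;
    my ( $lines, $errors ) = ( 0, 0 );
    while ( my $line = <$fh> ) {
        $lines++;
        for my $word ( $line =~ /([A-Za-z']+)/g ) {
            $errors++ unless exists $dict->{ lc $word };
        }
        # Once enough lines have been seen, decide whether the OCR is garbage.
        return ( $errors / $lines > 3 ) if $lines >= 200;
    }
    return $lines ? ( $errors / $lines > 3 ) : 0;
}
```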
I had forgotten about aspell, though, so I did give it a go. However, unless I'm missing something, I need to do a bit of parsing in order to understand the response aspell gives. In addition, I'd have to start a separate instance of aspell for every sentence, and there are literally billions of sentences, so that would be some overhead.
If you do have any ideas, however, about how I could restructure the above process so that I could run aspell over a single file and get my results, I'd be interested to hear them.
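For what it's worth, the parsing I have in mind would look roughly like the untested sketch below, assuming aspell's pipe ("-a") mode and a single long-running process driven through IPC::Open2:

```perl
use strict;
use warnings;
use IPC::Open2;
use IO::Handle;

# Untested sketch: one persistent "aspell -a" (pipe mode) process,
# instead of a fresh aspell per sentence.  Assumes aspell is on PATH.
my $pid = open2( my $from_aspell, my $to_aspell, qw(aspell -a --lang=en) );
$to_aspell->autoflush(1);

my $banner = <$from_aspell>;    # discard aspell's version banner line
print {$to_aspell} "!\n";       # terse mode: only misspellings are reported

sub count_misspellings {
    my ($sentence) = @_;
    print {$to_aspell} "^$sentence\n";   # leading ^ escapes aspell's command characters
    my $bad = 0;
    while ( my $report = <$from_aspell> ) {
        last if $report =~ /^\s*$/;      # blank line ends the report for this input line
        $bad++ if $report =~ /^[&#]/;    # '&' = miss with suggestions, '#' = miss without
    }
    return $bad;
}

print count_misspellings("Teh quick brown fox"), "\n";    # expect 1, for "Teh"
```

In terse mode the lines for correctly spelled words are suppressed, so for clean text the only thing that comes back per input line is the terminating blank line, which keeps both the parsing and the traffic over the pipe small.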