One of the more interesting things on my $work 'to do' list is something very similar, but for a much larger number of documents. I plan to investigate Elasticsearch, combined with a Mojolicious front end for querying and displaying results. I'm only at the stage of initial investigation into Elasticsearch; this is to replace a legacy solution which is showing its age.
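For what it's worth, the query side of a setup like that can be tiny; here's a minimal sketch (in Python, for brevity) of the JSON body a front end would send to Elasticsearch's _search endpoint. The index name and the "content" field are my own assumptions, not a real schema:

```python
# Sketch of a full-text search request body for Elasticsearch.
# The field name "content" is an assumption about the mapping.
def build_search_body(terms, size=10):
    """Build a match query over an assumed 'content' field,
    requiring all terms and asking for highlighted snippets."""
    return {
        "size": size,
        "query": {
            "match": {"content": {"query": terms, "operator": "and"}}
        },
        "highlight": {"fields": {"content": {}}},
    }

body = build_search_body("contract dispute 2012")
```

The front end (Mojolicious or anything else) just POSTs that body and renders the highlighted hits.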
The weakest link is OCR. But if you are only interested in keywords (as opposed to the complete text), then even if the OCR output is incomplete, there are probabilistic methods to complete (and even validate) the OCR'd keyword. If you want to adapt these methods to your context, you need to manually convert a representative set of documents to text (or manually correct the OCR output for those documents only) and feed that to your methods. That assumes enough of the documents belong to a single context, e.g. legal documents or spy reports, I guess.
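One cheap version of that "probabilistic completion" idea: build a vocabulary of known keywords from the manually corrected documents, then fuzzy-match each mangled OCR token against it. A sketch using Python's stdlib difflib (the vocabulary here is invented example data):

```python
import difflib

# Vocabulary harvested from the manually corrected documents
# (invented example data for the sketch).
KNOWN_KEYWORDS = ["plaintiff", "defendant", "judgment", "affidavit", "subpoena"]

def correct_ocr_token(token, vocab=KNOWN_KEYWORDS, cutoff=0.6):
    """Return the closest known keyword, or the token unchanged
    if nothing in the vocabulary is similar enough."""
    matches = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else token

print(correct_ocr_token("defandant"))  # a typical OCR mangling -> defendant
```

Raising the cutoff trades recall for precision; with real OCR you would tune it against your manually corrected sample.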
Once you have the document text, there are various open-source search frameworks to use, as marto mentions, and it should be smooth sailing from there on.
What I would not do is form the filename from keywords. I would rather give each file a unique numeric id, then use your already-implemented search engine to search. If your documents are already indexed on some keywords, e.g. Report 5,5/12/12,ABC.vs.XYZ, then, optionally, process that and insert it into the DB too, to enhance your search engine.
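The numeric-id idea is cheap to wire up; a sketch with Python's stdlib sqlite3 (table and column names are my own invention), keeping the original name and any pre-existing index line alongside the id:

```python
import sqlite3

# In-memory DB for the sketch; on disk you'd pass a filename instead.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE documents (
           id         INTEGER PRIMARY KEY,  -- becomes the filename, e.g. 42.pdf
           orig_name  TEXT NOT NULL,        -- whatever the scan was called
           index_line TEXT                  -- e.g. 'Report 5,5/12/12,ABC.vs.XYZ'
       )"""
)
cur = conn.execute(
    "INSERT INTO documents (orig_name, index_line) VALUES (?, ?)",
    ("scan_0001.pdf", "Report 5,5/12/12,ABC.vs.XYZ"),
)
doc_id = cur.lastrowid
filename = f"{doc_id}.pdf"  # store the file under its id, not its keywords
```

The search engine then only ever returns ids, and the DB row tells you everything else about the document.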
"...Once you have the document text, there are various open-source search frameworks to use..."
Elasticsearch has plugins, such as fscrawler, that deal with all of that for you.
BTW, I don't know if it's still alive and well. Best regards, Karl
«The Crux of the Biscuit is the Apostrophe»
perl -MCrypt::CBC -E 'say Crypt::CBC->new(-key=>'kgb',-cipher=>"Blowfish")->decrypt_hex($ENV{KARL});'
I don't think MySQL is the right kind of database for this. I'd stuff the texts into Solr or some other full-text search engine.
Jenda
Enoch was right!
Enjoy the last years of Rome.
Honestly, with only 10k files I'd probably do steps 1 and 2 and then use ripgrep or The Silver Searcher for the string searches. If that ends up being too slow, you could use any of the already-mentioned tools to speed things up.
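The ripgrep route can be as little as shelling out and collecting filenames; a sketch (the search path is invented) that only builds the command line, so the flags are easy to see:

```python
import shlex

def rg_command(pattern, directory, ignore_case=True):
    """Build a ripgrep invocation that lists files containing the pattern.
    -l prints only matching filenames; swap in --json for structured matches."""
    cmd = ["rg", "-l"]
    if ignore_case:
        cmd.append("-i")
    cmd += ["--", pattern, directory]  # '--' guards against patterns like '-foo'
    return cmd

# You'd hand this list to subprocess.run(..., capture_output=True, text=True)
print(shlex.join(rg_command("ABC vs XYZ", "/path/to/texts")))
```

Each line of stdout is then a filename, which maps straight back to a document id if you named the files that way.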
πάντων χρημάτων μέτρον έστιν άνθρωπος.