in reply to Multiple Extraction from Multiple Files
I do not fully understand your problem. However, a general solution for handling very large files, rather than holding everything in memory, is to perform an on-disk sort. Any good sort utility (or module) can handle a 100MB file quite easily.
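As a rough illustration of what an on-disk sort does internally (the function and file names here are invented for the sketch): sort manageable chunks in memory, spill each sorted run to a temporary file, then do a k-way merge of the runs.

```python
# Sketch of an external (on-disk) sort. Names are made up for illustration;
# a real sort utility does the same thing with far more tuning.
import heapq, os, tempfile

def external_sort(input_path, output_path, chunk_lines=100_000):
    """Sort a line-oriented file that may be larger than memory."""
    runs = []
    with open(input_path) as f:
        while True:
            # Read at most chunk_lines lines into memory.
            chunk = [line for _, line in zip(range(chunk_lines), f)]
            if not chunk:
                break
            chunk.sort()                       # in-memory sort of one chunk
            run = tempfile.NamedTemporaryFile("w+", delete=False)
            run.writelines(chunk)              # spill the sorted run to disk
            run.seek(0)
            runs.append(run)
    with open(output_path, "w") as out:
        out.writelines(heapq.merge(*runs))     # k-way merge of the sorted runs
    for r in runs:
        r.close()
        os.unlink(r.name)

# Tiny demonstration with throwaway files:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "big.txt")
dst = os.path.join(tmp, "big.sorted.txt")
with open(src, "w") as f:
    f.write("c\na\nb\n")
external_sort(src, dst, chunk_lines=2)
print(open(dst).read())   # a, b, c on separate lines
```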
Once the file is sorted, all occurrences of any given key value are adjacent, and a key that does not occur is simply absent from its place in the order. Furthermore, two identically sorted files can be matched and merged without searching.
You say that you are “searching for more than one string.” If the number of strings being searched for fits comfortably in an in-memory hash, you can simply read each file sequentially, write the matching records to another file, and then go back and process that output file.
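That hash-and-scan pass might look like this (a Python stand-in for the idea; the function name and the assumption that the key is the first comma-separated field are mine):

```python
# Sketch: hold the search strings in an in-memory hash/set, scan each input
# file once, and append every matching record to one output file.
import os, tempfile

def extract_matching(search_strings, input_paths, output_path):
    """Write every record whose key is in search_strings to output_path."""
    wanted = set(search_strings)              # the in-memory hash
    with open(output_path, "w") as out:
        for path in input_paths:
            with open(path) as f:
                for line in f:
                    key = line.split(",", 1)[0]   # assumed record layout
                    if key in wanted:
                        out.write(line)

# Tiny demonstration with a throwaway file:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.csv")
dst = os.path.join(tmp, "matches.csv")
with open(src, "w") as f:
    f.write("k1,one\nk2,two\nk3,three\n")
extract_matching({"k1", "k3"}, [src], dst)
print(open(dst).read())   # only the k1 and k3 records survive
```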
If the number of strings is much larger, then you have a classic MERGE situation: place the strings in a file and sort it, then sort each of the 40 files in turn and merge each one against that key file.
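The merge step itself needs only one forward pass over each pair of sorted inputs, because both are in the same order. A sketch (Python stand-in; the comma-separated record layout is an assumption):

```python
# Sketch of merging one sorted data file against a sorted key file.
# Both inputs must be sorted on the key; neither is ever searched or re-read.
def merge_join(key_lines, data_lines):
    """Yield data records whose key appears in the sorted key list."""
    keys = iter(key_lines)
    k = next(keys, None)
    for line in data_lines:
        dkey = line.split(",", 1)[0]
        while k is not None and k < dkey:
            k = next(keys, None)      # advance past keys below the data key
        if k == dkey:
            yield line                # keys match: keep the record

sorted_keys = ["a", "c"]                      # the sorted key file
sorted_data = ["a,1\n", "b,2\n", "c,3\n"]     # one sorted data file
print(list(merge_join(sorted_keys, sorted_data)))   # ['a,1\n', 'c,3\n']
```

In practice you would run this once per data file, with `key_lines` and `data_lines` being open file handles rather than lists.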