Here is what my code does:
I have a file with 20+ million lines/records (say a.txt).
I have another file with around 600 lines/records (say b.txt). Each of these lines belongs to a category, so a single category can match more than one line/record.
What my code does is:
1. Build a hash from b.txt (key = category; value = the mandatory part(s) of the records in that category).
2. Read every record from a.txt and check whether it matches any of those mandatory parts; if it does, create a file for that category (if one does not exist yet) and dump the entire line/record into it. (A sketch of this loop follows below.)
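A simplified Perl sketch of that loop (the tab-delimited layout of b.txt, plain substring matching, and one category per record are all assumptions on my part, not the exact code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 1. Build the lookup from b.txt: category => [mandatory parts].
    my %patterns;
    open my $b_fh, '<', 'b.txt' or die "b.txt: $!";
    while (<$b_fh>) {
        chomp;
        my ($category, $mandatory) = split /\t/, $_, 2;  # assumed layout
        push @{ $patterns{$category} }, $mandatory;
    }
    close $b_fh;

    # 2. Scan a.txt; on a match, append the whole record to that category's file.
    my %out_fh;  # cache the handles so each file is opened only once
    open my $a_fh, '<', 'a.txt' or die "a.txt: $!";
    while (my $line = <$a_fh>) {
        CATEGORY:
        for my $category (keys %patterns) {
            for my $mandatory (@{ $patterns{$category} }) {
                next unless index($line, $mandatory) >= 0;  # substring match assumed
                $out_fh{$category} //= do {
                    open my $fh, '>>', "$category.txt" or die "$category.txt: $!";
                    $fh;
                };
                print { $out_fh{$category} } $line;
                last CATEGORY;  # assumes a record lands in one category
            }
        }
    }
    close $a_fh;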
So every record (of the 20+ million) gets compared against roughly 600 records; in the worst case, where the match is the last record checked, that is about 20 million x 600 = 12 billion comparisons. And that is where all the processing/looping time goes.
Please help: how can I expedite the process?
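(One idea, sketched below under assumptions rather than as a definitive fix: since the ~600 mandatory parts are fixed strings, they can be combined into a single alternation regex. Perl 5.10+ compiles an alternation of literal strings into a trie internally, so each of the 20+ million lines is scanned once instead of up to ~600 times. The tab-delimited b.txt layout and the one-category-per-part mapping are assumptions:)

    use strict;
    use warnings;

    # Map each mandatory part back to its category
    # (assumes a part belongs to exactly one category).
    my %category_of;
    open my $b_fh, '<', 'b.txt' or die "b.txt: $!";
    while (<$b_fh>) {
        chomp;
        my ($category, $mandatory) = split /\t/, $_, 2;  # assumed layout
        $category_of{$mandatory} = $category;
    }
    close $b_fh;

    # One regex for all ~600 parts; longest-first so overlapping parts
    # prefer the longer match, quotemeta keeps them literal.
    my $alt = join '|',
              map  { quotemeta }
              sort { length($b) <=> length($a) }
              keys %category_of;
    my $match_re = qr/($alt)/;

    my %out_fh;  # keep handles open instead of reopening per line
    open my $a_fh, '<', 'a.txt' or die "a.txt: $!";
    while (my $line = <$a_fh>) {
        next unless $line =~ $match_re;   # one scan per line
        my $category = $category_of{$1};  # $1 = the part that matched
        $out_fh{$category} //= do {
            open my $fh, '>>', "$category.txt" or die "$category.txt: $!";
            $fh;
        };
        print { $out_fh{$category} } $line;
    }
    close $a_fh;

With this, the per-line cost no longer grows with the number of b.txt records.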