Dear Monks,
First, let me thank you for accepting my registration and for giving me the chance to interact with people like you, who are guiding the rest of us on the way to becoming Monks ourselves.
Thank you once again.
Let me start my first interaction by asking for a suggestion rather than offering one. I have a question about file handling. I have two files, DUMP_ID and DUMP_CARD_NO, each containing about 37 million records (the count may increase or decrease from quarter to quarter).
The scenario: DUMP_ID contains the unique id of each card member, and DUMP_CARD_NO contains the unique id together with the corresponding card account number. A unique id can have more than one card account number, depending on the region/locale. I have to take each record from DUMP_ID and compare its unique id against the unique ids in DUMP_CARD_NO; if there is a match, the unique id and the corresponding account number have to be written to another file (remember, a single unique id can have more than one card account number).
Currently, I read every id/account pair from DUMP_CARD_NO and load them into a hash:
    while (<DUMP_CARD_NO>) {
        chomp;
        my ($id, $accno) = split /\|/, $_;
        push @{$hashList{$id}}, $accno;   # was $guid, an undefined variable
    }
Then I take each id from DUMP_ID and look it up in that hash:
    while (<DUMP_ID>) {
        chomp;
        s/\s+//g;
        if ($hashList{$_}) {
            for my $accno (@{$hashList{$_}}) {
                print FINAL_FILE "$_|$accno\n";
            }
        }
    }
The problem is that we run out of memory, so the job never completes. Kindly guide me toward the right approach.
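One variant I have been considering (a minimal sketch, untested against the real files, with file names and the '|' delimiter assumed from the description above): since DUMP_ID holds exactly one unique id per line, hash *those* ids instead of the 37 million id/account pairs, then stream DUMP_CARD_NO once and print each matching pair as it goes by. Nothing from DUMP_CARD_NO is ever kept in memory, so each hash entry is a single small key rather than a key plus an array of account numbers.

```perl
use strict;
use warnings;

# Sketch: build a set of wanted ids from DUMP_ID, then stream DUMP_CARD_NO
# and emit "id|accno" for every pair whose id is in the set. The caller
# passes already-opened filehandles, so the sub is easy to test.
sub write_matches {
    my ($id_fh, $card_fh, $out_fh) = @_;

    my %want;                       # id => 1 -- one small entry per member
    while (<$id_fh>) {
        chomp;
        s/\s+//g;                   # same whitespace cleanup as the original
        $want{$_} = 1 if length;
    }

    while (<$card_fh>) {
        chomp;
        my ($id, $accno) = split /\|/, $_;
        print {$out_fh} "$id|$accno\n" if defined $id && $want{$id};
    }
    return;
}
```

For the real run, the handles would be opened on the actual files, e.g. `open my $ids, '<', 'DUMP_ID' or die $!;` and likewise for DUMP_CARD_NO and the output file. Whether this fits in memory depends on how long the ids are; if 37 million keys is still too much, an external sort of both files followed by a merge join would avoid the hash entirely.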
Thank you.
In reply to Help on file comparison by Danu