There are various things you could do to reduce the memory requirements. For example, you could build up a string of account numbers rather than an array. This would save quite a lot of space:
C:\test>p1
@a = map int rand( 1e16 ), 1 .. 10;;
print total_size \@a;;
496

$s = join ' ', @a;;
print total_size $s;;
216
Multiply that saving by 37 million and you might avoid the problem. Take it a step further and pack the account numbers and you can save even more:
$a = pack 'Q*', @a;;
print total_size $a;;
136
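For reference, here is a self-contained sketch of those same measurements that can be run outside the interactive session shown above. total_size() comes from the Devel::Size module; the exact byte counts will vary a little between perl builds, and pack 'Q*' needs a perl built with 64-bit integer support:

#!/usr/bin/perl
use strict;
use warnings;
use Devel::Size qw( total_size );

# Ten 16-digit "account numbers" held as integers in an array.
my @a = map { int rand( 1e16 ) } 1 .. 10;
print total_size( \@a ), "\n";   # array of 10 integers

# The same numbers joined into one space-separated string.
my $s = join ' ', @a;
print total_size( $s ), "\n";    # a single scalar; much smaller

# The same numbers packed as 64-bit unsigned integers ('Q*').
my $p = pack 'Q*', @a;
print total_size( $p ), "\n";    # 8 bytes per number plus scalar overhead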
But having looked back at your OP, what you are doing makes no sense at all.
It makes no sense to even read the second file, as you only output records that are already in the hash built from the first file. In other words, having built the hash from the first file, all you need to do is dump its contents and ignore the second file completely.
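A minimal sketch of that idea; the filename and the assumption that the unique id is the first whitespace-separated field on each line are mine, not from your post:

#!/usr/bin/perl
# Build a hash of records keyed by id from the first file only,
# then dump it; the second file never needs to be read.
use strict;
use warnings;

my %by_id;
open my $in, '<', 'DUMP_ID' or die "Cannot open DUMP_ID: $!";
while ( my $line = <$in> ) {
    my ( $id ) = split ' ', $line;      # id assumed to be the first field
    push @{ $by_id{ $id } }, $line;
}
close $in;

# Print every record, grouped by id.
print @{ $by_id{ $_ } } for keys %by_id;

Of course, with 37 million records that hash is exactly where your memory pressure comes from, which is why sorting is the better option.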
But since your final output file is identical to the first of your input files, except that all records with the same unique id are grouped together, the simplest and fastest way to achieve that is just to sort that file.
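Something along these lines would do it, assuming the unique id is the leading field on each line; the output filename is a placeholder:

#!/usr/bin/perl
# Sort the first input file so that all records sharing a unique id
# end up adjacent in the output.
use strict;
use warnings;

open my $in,  '<', 'DUMP_ID'        or die "Cannot open DUMP_ID: $!";
open my $out, '>', 'DUMP_ID.sorted' or die "Cannot open DUMP_ID.sorted: $!";

print {$out} sort <$in>;   # lexical sort of whole lines groups equal ids

close $out;
close $in;

For a 37-million-line file, the system sort utility, which sorts externally rather than holding everything in memory, may be the more practical choice.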
Originally you call your files "DUMP_ID" and "DUMP_CARD_NO", and then later you talk about "DUMP_ACCT_NO". That, combined with the inconsistencies in your posted code, makes me think that this question is a plant.