edit: Added teddy bear, removed foot from mouth
That would also work, but I think it could use more memory than loading the first file into an array, since you might load hash elements on which you will never report.
However, because it's you, I naturally presume you have a reason for using that approach, and I'm concerned that I can't see why.
So I guess I'll bite the bullet and ask -- why do you recommend loading the ticker info into a hash first?
However, I just asked my teddy bear and the answer came back clearly: my way trades away execution efficiency to gain space efficiency. That is silly given the OP specifically stated there would only be a few thousand lines. The linear search via grep for every line of the ticker file is a complete waste.
:: sigh ::
I'll go and hide in my corner now.
Yes, I guess you've got the idea: using a hash is by far the fastest (and also easiest) method, provided the data will fit into memory, which we know to be the case.
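The hash approach being agreed on here might look like the following minimal sketch. The inline sample data, file layout ("SYMBOL,price"), and symbol names are assumptions for illustration; in a real script the ticker lines would be read from the first file.

```perl
use strict;
use warnings;

# Sample ticker data inline for illustration; in practice these lines
# would come from reading the ticker file. Assumed format: "SYMBOL,price".
my @ticker_lines = ( "IBM,140.25", "AAPL,189.10", "GOOG,132.50" );

# Build the hash once: every later lookup is then a constant-time
# hash access instead of a linear grep through the file per line.
my %price_for;
for my $line (@ticker_lines) {
    my ( $symbol, $price ) = split /,/, $line;
    $price_for{$symbol} = $price;
}

# Symbols to report on (the second file in this thread's scenario).
for my $symbol (qw(AAPL GOOG MSFT)) {
    if ( exists $price_for{$symbol} ) {
        print "$symbol => $price_for{$symbol}\n";
    }
}
```

Note that `exists` skips symbols with no ticker entry (MSFT above) without autovivifying them into the hash.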
Thanks a lot.
Now I want to take this to the next level. We get files from vendors in different formats for a product, and I want to transform them into one output format.
Output file:
Ocol1, Ocol2, Ocol3, Ocol4, Ocol5, Ocol6, Ocol7, Ocol8, Ocol9, Ocol10
Vendor files:
Vendor1:
Vcol1, Vcol2, Vcol3, Vcol4, Vcol5, Vcol6
Source-target mapping:
Vcol1->Ocol1
Vcol3->Ocol2
Vcol4->Ocol4
Vcol5->Ocol8
Vendor2:
Vcol1, Vcol2, Vcol3, Vcol4, Vcol5, Vcol6, Vcol12, Vcol15
Source-target mapping:
Vcol1->Ocol12
Vcol3->Ocol2
Vcol4->Ocol15
Vcol5->Ocol8
I want to define the input/output columns and source-target mappings in config files, and write a generic Perl program that reads the config files and generates the output file.
With a shell script, I could create config files and source them into my script.
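One common approach is to keep each vendor's mapping in its own config file and have the program parse it into a hash of input column to output column. The sketch below uses the Vendor1 mapping shown above; the config format (one "VcolN->OcolM" pair per line), the here-doc standing in for a real file, and the sample record are all assumptions for illustration.

```perl
use strict;
use warnings;

# Hypothetical per-vendor mapping config, one "Vcol->Ocol" pair per
# line. In practice this would live in a file such as vendor1.map and
# be read with open/while instead of a here-doc.
my $config = <<'END';
Vcol1->Ocol1
Vcol3->Ocol2
Vcol4->Ocol4
Vcol5->Ocol8
END

# Parse the config into a hash: input column index => output column
# index (converted to zero-based array indexes).
my %map;
for my $line ( split /\n/, $config ) {
    if ( $line =~ /^Vcol(\d+)\s*->\s*Ocol(\d+)\s*$/ ) {
        $map{ $1 - 1 } = $2 - 1;
    }
}

# Transform one vendor record into the 10-column output layout;
# unmapped output columns stay empty.
my @in  = split /,/, 'a,b,c,d,e,f';    # a sample Vendor1 line
my @out = ('') x 10;
while ( my ( $from, $to ) = each %map ) {
    $out[$to] = $in[$from];
}
print join( ',', @out ), "\n";         # a,c,,d,,,,e,,
```

The same program then only needs to be pointed at a different mapping file per vendor; nothing vendor-specific is hard-coded.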
Would really appreciate some pointers here.
Thanks a lot