This will read the entire file into RAM. No problem for 100 kBytes, big trouble for big files
Yes, of course that's true.
But given the apparent nature of the data, I feel it's safe to assume the file will be small relative to the available RAM and paging space. If it were much larger than that, processing it entirely in Perl is almost certainly the wrong approach. For a file large enough to be a problem, Perl should read one line at a time, or load the data into a database, at which point getting the first occurrence becomes trivial (see the sketches that follow).
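A minimal line-at-a-time sketch of the first route, assuming the duplicated strings are whole lines and using a hypothetical input file data.txt: memory grows with the number of unique lines held in the %seen hash, not with the file size.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch only: 'data.txt' is a hypothetical input file, and the
    # duplicated strings are assumed to be whole lines.
    my %seen;
    open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        next if $seen{$line}++;    # already seen this one, skip it
        print "$line\n";           # first occurrence only
    }
    close $fh;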
So for a truly huge file, the question really needs asking on SQLMonks (wishful thinking...)
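And a sketch of the database route, assuming SQLite via DBI/DBD::SQLite and the same hypothetical data.txt; the database, table and column names are made up for illustration. A UNIQUE constraint plus INSERT OR IGNORE is what makes "keep the first occurrence" trivial: every later duplicate simply fails to insert.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Sketch only: load the lines into SQLite, letting a UNIQUE constraint
    # discard every occurrence after the first.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=dedup.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );
    $dbh->do('CREATE TABLE IF NOT EXISTS lines (value TEXT UNIQUE)');

    my $insert = $dbh->prepare('INSERT OR IGNORE INTO lines (value) VALUES (?)');

    open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        $insert->execute($line);    # duplicates are silently ignored
    }
    close $fh;

    # First occurrences, in the order they first appeared
    # (rowid increases with insertion order here).
    print "$_\n"
        for @{ $dbh->selectcol_arrayref('SELECT value FROM lines ORDER BY rowid') };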
In reply to Re^3: How can I keep the first occurrence from duplicated strings? by Bod, in thread How can I keep the first occurrence from duplicated strings? by Anonymous Monk