I'd say just use a hash. 10MB is not much. I have a few scripts
that process even larger files with ease using Perl hashes.
> I need to uniq based on FIELD(s)...
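
Something along these lines should do it (an untested sketch; it assumes whitespace-delimited records and that the first and third fields form the key -- adjust @key_fields and the split pattern for your actual data):

use strict;
use warnings;

my @key_fields = (0, 2);    # assumed key columns (0-based); change to suit

my %seen;
while (my $line = <>) {
    chomp $line;
    my @f = split /\s+/, $line;
    my $key = join "\x1f", @f[@key_fields];   # composite key built from the chosen fields
    print "$line\n" unless $seen{$key}++;     # print only the first record seen for each key
}

Run it as something like 'perl uniq_fields.pl data.txt > deduped.txt' (the script name is just for illustration). Even if every record has a distinct key, a 10MB file's worth of keys makes for a modest hash.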
However, if you are intent on avoiding 'large' hashes, could you
elaborate on what 'uniq' involves here? Do you simply want to weed out
duplicate records (e.g. collapse large data files), or sort the file
based on certain fields?
"There is no system but GNU, and Linux is one of its kernels." -- Confession of Faith