a straight grep
The best way to get an idea is to measure: produce several fake input data sets of increasing size that are representative of the data you expect to get in the future, and benchmark the various approaches against them. You've said "grep" twice now but haven't shown an example of it, so without that we can't really compare performance objectively.
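A rough benchmarking sketch along those lines might look like the following. The fake data layout, the column being extracted, and the dedup.pl script name are all made-up placeholders, not anything from the code discussed so far:

    #!/usr/bin/env perl
    # Rough benchmarking sketch: generate fake input files of increasing size
    # and time two approaches on each.  The fake data layout, the column to
    # extract, and the dedup.pl script are placeholders for illustration.
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    for my $lines (100_000, 1_000_000, 10_000_000) {
        my $file = "fake_$lines.txt";
        open my $out, '>', $file or die "$file: $!";
        for (1 .. $lines) {
            # three tab-separated columns; the second has a limited set of values
            print $out join("\t", int(rand 1e6), "key" . int(rand 50_000), int(rand 1e6)), "\n";
        }
        close $out;

        my $t0 = time();
        system("cut -f2 $file | sort -u >/dev/null") == 0 or die "pipeline failed";
        printf "%9d lines: cut | sort -u    %.2fs\n", $lines, time() - $t0;

        $t0 = time();
        system("perl", "dedup.pl", $file) == 0 or die "dedup.pl failed";   # hypothetical script
        printf "%9d lines: hash-based perl  %.2fs\n", $lines, time() - $t0;
    }

Watching memory while each step runs (e.g. with top, or /usr/bin/time -v on GNU systems) is just as informative as the wall-clock numbers.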
As for the code shown so far, I think the Perl code I posted should have a significantly smaller memory footprint than cut | sort | uniq (or cut | sort -u, as hippo said), since the only thing my code keeps in memory is the resulting output data set (that is, the keys of the hash; the numeric hash values shouldn't add a ton of overhead). I haven't measured yet though! (it's Saturday evening here after all ;-) )
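For reference, a minimal sketch of the kind of hash-based deduplication I mean; the tab separator and the column index are assumptions, since the real data layout hasn't been shown:

    #!/usr/bin/env perl
    # Minimal sketch of the hash-based dedup described above.
    # Assumptions: tab-separated input, unique values of the second column wanted.
    use strict;
    use warnings;

    my %seen;
    open my $FILEHANDLE, '<', $ARGV[0] or die "$ARGV[0]: $!";
    while ( my $line = <$FILEHANDLE> ) {
        chomp $line;
        my $key = ( split /\t/, $line )[1];   # second column (index 1)
        next unless defined $key;             # skip short/malformed lines
        $seen{$key}++;                        # only the keys accumulate in memory
    }
    close $FILEHANDLE;
    print "$_\n" for sort keys %seen;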
Hi, haukex will provide his own answer no doubt, but: No, the memory footprint should not grow, since
while ( my $line = <$FILEHANDLE> ) { ... }
does *not* slurp the entire file into memory, but reads it one line at a time. See, for example, https://perldoc.perl.org/perlfaq5.html#How-can-I-read-in-an-entire-file-all-at-once%3f for a discussion of the issue.
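To make the contrast concrete (a small sketch, with a hypothetical process() handler):

    # Reads one line at a time; memory use stays roughly constant
    # regardless of the file's size:
    while ( my $line = <$FILEHANDLE> ) {
        process($line);   # hypothetical per-line handler
    }

    # By contrast, this slurps every line of the file into memory at once:
    my @all_lines = <$FILEHANDLE>;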
The way forward always starts with a minimal test.
The memory footprint may not grow very fast, but it will most probably grow, because the %seen hash gets one entry per unique key and is therefore very likely to get larger as the file gets bigger (unless the additional input consists almost entirely of duplicates).
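One quick way to watch that growth, assuming the CPAN module Devel::Size is available (the column layout is again just a placeholder):

    #!/usr/bin/env perl
    # Sketch: watch the %seen hash grow as unique keys accumulate.
    # Assumes Devel::Size is installed and tab-separated input.
    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    open my $FILEHANDLE, '<', $ARGV[0] or die "$ARGV[0]: $!";
    my %seen;
    while ( my $line = <$FILEHANDLE> ) {
        my $key = ( split /\t/, $line )[1];   # assumed: second column
        next unless defined $key;
        $seen{$key}++;
        # report the hash's memory footprint every 100_000 lines
        printf STDERR "line %d: %%seen uses %d bytes\n", $., total_size(\%seen)
            unless $. % 100_000;
    }
    close $FILEHANDLE;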
That snippet only keeps the result data set (i.e. the unique keys) in memory. If you anticipate result sets larger than the available RAM, you'll have to revise the general approach (e.g. use a database), since none of the straightforward in-memory solutions will work in that case.
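For instance, one way to move the set of unique keys out of RAM would be a sketch like this, assuming DBI and DBD::SQLite are installed (the column layout is once more a placeholder):

    #!/usr/bin/env perl
    # Sketch: let SQLite keep the set of unique keys on disk instead of in a hash.
    # Assumptions: DBI + DBD::SQLite installed, tab-separated input, key in column 2.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=unique_keys.db", "", "",
        { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("CREATE TABLE IF NOT EXISTS keys_seen (k TEXT PRIMARY KEY)");
    my $ins = $dbh->prepare("INSERT OR IGNORE INTO keys_seen (k) VALUES (?)");

    open my $fh, '<', $ARGV[0] or die "$ARGV[0]: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my $key = ( split /\t/, $line )[1];
        next unless defined $key;
        $ins->execute($key);
        $dbh->commit unless $. % 100_000;   # commit in batches for speed
    }
    close $fh;
    $dbh->commit;

    # read the unique keys back out, sorted, without holding them all in RAM
    my $sth = $dbh->prepare("SELECT k FROM keys_seen ORDER BY k");
    $sth->execute;
    while ( my ($k) = $sth->fetchrow_array ) { print "$k\n"; }
    $dbh->disconnect;

Keeping the keys in an indexed table also means the result can be re-queried or updated incrementally later instead of recomputing it from scratch.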