2 million strings * 100 bytes * 2 arrays is about 4e8 bytes, well under 1 GB, so the data should fit comfortably in memory even on oldish computers. A simple Perl hash-based technique of the sort used for finding unique strings should work well, and it is both quick to write and quick to execute. If the data were five times as big, processing would probably start to slow down somewhat, but even then it would likely take only a few tens of seconds plus the time needed to read the data from disk.
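A minimal sketch of the hash technique, using a few short made-up strings in place of the real 2-million-element arrays (the variable names and sample data are illustrative, not from the original problem):

```perl
use strict;
use warnings;

# Hypothetical stand-ins for the two large arrays of long strings
my @list1 = qw(AAAC GGGT AAAC TTTA);
my @list2 = qw(GGGT CCCC TTTA);

# Build a hash keyed by the strings in the first array.
# One pass, one hash entry per unique string.
my %seen;
$seen{$_} = 1 for @list1;

# Strings from the second array that also appear in the first
my @common = grep { $seen{$_} } @list2;

# Strings that appear only in the second array
my @only2 = grep { !$seen{$_} } @list2;

print "common: @common\n";
print "only in list2: @only2\n";
```

Hash lookups are effectively constant time, so the whole comparison is a single linear pass over each array rather than the quadratic cost of comparing every pair.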
In reply to Re: Compare 2 very large arrays of long strings as elements by GrandFather
in thread Compare 2 very large arrays of long strings as elements by onlyIDleft