In reply to: Large file, multi dimensional hash - out of memory

Just for completeness, an approach for the unsorted case.¹

    /foo/bar/baz/123 aaa
    /foo/bar/baz/123 aab
    /foo/bar/baz/123 aac
    /foo/bar/baz/124 aaa
    /foo/bar/baz/124 aab

You could still profit from automatic swapping of unused data structures by using more levels of hashes and splitting the paths at '/'.

For instance, $duplicates{foo}{bar}{baz}{123}++ would only touch a few comparatively small hashes that need to stay in memory.
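A minimal sketch of how such a hash hierarchy could be built, assuming each line holds a path and a value separated by whitespace (as in the sample above); the field layout and the inline __DATA__ input are illustrative assumptions:

    use strict;
    use warnings;

    # Build a hash hierarchy keyed by the path components,
    # e.g. "/foo/bar/baz/123 aaa" -> $duplicates{foo}{bar}{baz}{123}{aaa}++
    my %duplicates;

    while (my $line = <DATA>) {
        chomp $line;
        my ($path, $value) = split ' ', $line, 2;      # assumed layout: "<path> <value>"
        my @parts = grep { length } split '/', $path;  # drop the empty leading field
        my $node = \%duplicates;
        $node = $node->{$_} ||= {} for @parts;         # walk/create the nested hashes
        $node->{$value}++;
    }

    __DATA__
    /foo/bar/baz/123 aaa
    /foo/bar/baz/123 aab
    /foo/bar/baz/124 aaa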

The drawback is that swapping to disk costs time! That's why you should try to minimize swapping by concentrating on only a few hashes at a time.

That means you would need multiple runs over your file, each run concentrating on a subset of "unfinished" hashes (e.g. all paths starting with /foo/bar, and so on).
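A sketch of one such run, restricted to a single prefix so only that part of the hash hierarchy grows; the file name and the prefix are placeholders:

    use strict;
    use warnings;

    # One run: only count paths under a single prefix.
    my $prefix = '/foo/bar';                       # assumed prefix for this run
    my %duplicates;

    open my $fh, '<', 'big_file.txt' or die "open: $!";   # hypothetical file name
    while (my $line = <$fh>) {
        chomp $line;
        my ($path, $value) = split ' ', $line, 2;
        next unless index($path, $prefix) == 0;    # skip paths handled in other runs
        $duplicates{$path}{$value}++;
    }
    close $fh;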

I think loading, say, 1 million lines at a time into memory (i.e. into an array) and scanning them in multiple runs could give a balanced mix of time and memory complexity.

Already scanned elements could then be marked with delete $arr[$index], as in the sketch below.
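A rough sketch combining both ideas; the file name and prefix list are assumptions, and delete here only empties the array slot, serving as a "done" marker:

    use strict;
    use warnings;

    # Load ~1 million lines per chunk, then scan the chunk once per prefix.
    # Lines that have been counted are marked as done with delete.
    my $chunk_size = 1_000_000;
    my @prefixes   = ('/foo/bar', '/foo/baz');     # assumed prefixes
    my %duplicates;

    open my $fh, '<', 'big_file.txt' or die "open: $!";   # hypothetical file name
    while (1) {
        my @chunk;
        while (@chunk < $chunk_size and defined(my $line = <$fh>)) {
            chomp $line;
            push @chunk, $line;
        }
        last unless @chunk;                        # end of file reached

        for my $prefix (@prefixes) {
            for my $i (0 .. $#chunk) {
                next unless defined $chunk[$i];                # already handled
                next unless index($chunk[$i], $prefix) == 0;   # not this run's prefix
                my ($path, $value) = split ' ', $chunk[$i], 2;
                $duplicates{$path}{$value}++;
                delete $chunk[$i];                 # mark element as scanned
            }
        }
    }
    close $fh;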

Cheers Rolf

(addicted to the Perl Programming Language)

UPDATE

¹) I just realised that using multiple hash levels also elegantly handles the case of sorted input. You won't need multiple runs; the sorted input effectively minimizes the number of hashes that need to be swapped in and out.