Well, if you are untroubled by the prospect of requiring 350 MB of RAM, I can certainly see how you would be untroubled by the prospect of EBCDIC incompatibility.
If raw performance is the _only_ criterion, then granted, giganto-hash wins. Unless the script is a one-off, though, it's a poor choice: as you note, the algorithm falls apart if the criteria change even slightly, and most scripts have to be maintained. It demonstrates that hashing is an efficient way to test uniqueness, but that's not exactly shocking news, is it?
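For the record, the technique being debated boils down to something like this; a minimal sketch in Python rather than anyone's actual script, with a set standing in for the hash whose keys mark what's been seen:

```python
def unique(items):
    """Yield each item the first time it appears, preserving order.

    Membership testing against the 'seen' set is O(1) on average,
    which is why the hash-based approach wins on raw speed -- but
    every distinct key must be held in memory, which is where the
    hundreds of megabytes come from on large inputs.
    """
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            yield item


print(list(unique(["a", "b", "a", "c", "b"])))  # ['a', 'b', 'c']
```

The brittleness complained about above shows up the moment the uniqueness criterion changes (say, case-insensitive comparison): the keys, and possibly the whole data flow, have to be reworked.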
There's an awful lot of esoterica in this thread: solutions which don't scale, which are painfully verbose and/or obtuse and/or "clever", which savage the KISS principle, etc. Since this is largely an academic exercise, it's necessary and important to push the boundaries and explore techniques which are wildly unbalanced. And yet the reasoning bears little resemblance to the approach I take when there's code that needs to be optimized.