in reply to Re^4: Memory issue with large array comparison
in thread Memory issue with large array comparison

++ aaron_baugher; I didn't even notice the similarity... and shame on me for that, as it means mine is no answer to the OP's original dilemma.

I was, I realize now (thanks to your watchfulness), obsessing over the multiple responses offering a hash as the solution. I still think those represent something close to cargo-culting a meme (rather than actual code) -- and not an optimal solution, since, if I read the wisdom of the sages correctly (and if they're right, of course), using a hash would be at least as memory intensive as the array approach, and probably more so.

That's also an issue with map and grep (cf. Eliya's observations, above), though perhaps less so than with a hash (that's another test I haven't undertaken, but one which might lead to a publishable finding). In the same node, Eliya makes a cogent point (echoed in a slightly different context by dave_the_m's code): there are a variety of ways to attack the OP's problem with reduced memory demand. Yet another might be a step-wise solution: first, separate the id portion of the first dataset into a file of its own; then identify the ids in the second file that don't have identical (or identically normalized, if that's involved too) values. A rough sketch of that idea follows below.
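A minimal sketch of that step-wise idea, assuming tab-separated "id<TAB>value" records and made-up file names (neither is from the OP's post); for simplicity it checks only id presence and leaves the value comparison aside. Note that the final lookup still uses a hash, but only of the bare ids, so the full records of the first dataset never sit in memory:

    use strict;
    use warnings;

    # Step 1: copy just the id column of the first dataset to a file of its own.
    open my $in1, '<', 'dataset1.txt' or die "dataset1.txt: $!";
    open my $out, '>', 'dataset1.ids' or die "dataset1.ids: $!";
    while (my $line = <$in1>) {
        my ($id) = split /\t/, $line;
        print {$out} "$id\n";
    }
    close $in1;
    close $out;

    # Step 2: load only those ids, then walk the second dataset and
    # report the ids it contains that the first dataset does not.
    my %seen;
    open my $ids, '<', 'dataset1.ids' or die "dataset1.ids: $!";
    while (my $id = <$ids>) {
        chomp $id;
        $seen{$id} = 1;
    }
    close $ids;

    open my $in2, '<', 'dataset2.txt' or die "dataset2.txt: $!";
    while (my $line = <$in2>) {
        my ($id) = split /\t/, $line;
        print "only in second file: $id\n" unless $seen{$id};
    }
    close $in2;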

But, again, ++ for casting a sharp eye on the prior responses.


Re^6: Memory issue with large array comparison
by aaron_baugher (Curate) on May 26, 2012 at 03:27 UTC

    Thanks for the compliment, ww. You have a point that when everyone comes out with the same suggestion, it can reflect cultish thinking. But sometimes it means there really is one best way to do it. When the problem is "find strings from one list in another list," it's pretty hard to beat a hash lookup for speed and simplicity, and this was a pretty typical case. A hash lookup is so superior to other methods that it makes sense to reach for it automatically -- without thinking, even -- unless there's some reason it won't work. It's like using strict: you should always use it unless you know enough to know when not to.
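    In code, the idiom looks something like this (the lists here are only illustrative):

        use strict;
        use warnings;

        my @list1 = qw(apple banana cherry plum);
        my @list2 = qw(banana durian apple);

        # Build a lookup hash from the first list, then check each element
        # of the second list against it -- one O(1) hash lookup apiece.
        my %in_list1 = map { $_ => 1 } @list1;
        my @matches  = grep { $in_list1{$_} } @list2;

        print "@matches\n";    # prints "banana apple"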

    On the memory issue, I'm really not sure why the grep-in-a-grep solution ran the OP out of memory. Maybe it builds temporary lists in memory? In any case, a hash isn't all that memory intensive. I created a 10,000-item array and then turned it into a hash's keys; the hash took about 150% as much memory as the array. So cases where you have enough memory for the array but not for the hash should be unusual. I agree that solving the problem in less memory could be an interesting challenge, but it's only worth tackling if a hash lookup fails first.
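    One way to reproduce that kind of comparison (a sketch, not necessarily the exact test described above) is with the CPAN module Devel::Size:

        use strict;
        use warnings;
        use Devel::Size qw(total_size);

        my @array = map { "item_$_" } 1 .. 10_000;
        my %hash  = map { $_ => 1 } @array;     # same strings, now as hash keys

        printf "array: %d bytes\n", total_size(\@array);
        printf "hash:  %d bytes\n", total_size(\%hash);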

    Aaron B.
    Available for small or large Perl jobs; see my home node.