Been a while since I posted a question, and I've run through Super Search a few ways today. Looking for advice on the following question:
What is the fastest way to merge (and de-dup) two large arrays?
I can already merge my two arrays and de-dup them, with the code below...
IMPORTANT NOTE: This code is already using a bunch of forks (not shown), on account of all the trips through the loop doing a smartmatch. I'm already pushing all CPU cores pretty hard, so threading/forking should not be part of your recommendation unless it would be a very different approach.
# Depending on your version, you may need: no warnings 'experimental::smartmatch';
# I have two arrays, @rows and @data
# @data is a pre-loaded array of strings (~60k elements)
# @rows is a much larger array of strings (~600k elements)
foreach my $rawData (@rows) {
    if( $rawData ~~ @data ) {
        # don't do anything, already in array
    }
    else {
        push(@data,$rawData);
    }
}
My first thought is: am I doing this backwards? If @rows is much larger, should I just send the loop through @data instead (which would avoid 500k+ iterations)?
My second thought is: This is the kind of thing Perl does very well - there must be something faster out there, besides iterating over every element individually. Maybe using hashes instead (nothing needs to be "in order" here)? A rough sketch of what I mean is below. As it is, this runs for hours using up 8 cores.
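Here is the kind of hash-based approach I'm imagining (a rough sketch only, not yet tested against my real data):

# Mark everything already in @data as "seen", keyed on the string itself
my %seen;
$seen{$_} = 1 for @data;

foreach my $rawData (@rows) {
    # skip anything already seen (in @data or earlier in @rows)
    next if $seen{$rawData}++;
    push @data, $rawData;
}

If I understand it right, this would be a single pass over each array instead of a smartmatch scan of @data for every element of @rows - but I'd appreciate confirmation that this is the idiomatic way to do it, or a pointer to something faster still.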