in reply to get difference of two arrays
Just as a side note, this:

    my @bl = <QA>; my @a = <FA>; my %h; @h{@bl} = @bl;

is a relatively poor way of using memory: loading the file into an array only to store it immediately into a hash, and then never use the array again, is not very efficient. I would rather iterate on the file and load directly into the hash. Something like this:

    my %h; while (<QA>) { $h{$_} = 1; }

Or possibly:

    %h = map { $_, 1 } <QA>;

although this:

    %h = map { chomp $_; $_, 1 } <QA>;

might be better, since it removes the trailing newlines before using the lines as hash keys.
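To tie this back to the original question (printing the lines of one file that do not appear in the other), here is a minimal sketch of the streaming approach; the file names are made up for illustration, and lexical filehandles stand in for the bareword QA and FA handles quoted above:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical file names, for illustration only.
    open my $qa, '<', 'blacklist.txt' or die "Cannot open blacklist.txt: $!";
    open my $fa, '<', 'full_list.txt' or die "Cannot open full_list.txt: $!";

    # Stream the first file straight into the lookup hash, one line at
    # a time, without building an intermediate array.
    my %h;
    while (my $line = <$qa>) {
        chomp $line;
        $h{$line} = 1;
    }
    close $qa;

    # Stream the second file and keep only the lines not seen in the first.
    while (my $line = <$fa>) {
        chomp $line;
        print "$line\n" unless $h{$line};
    }
    close $fa;

This way only the lookup hash lives in memory; neither file is ever slurped into an array.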
Re^2: get difference of two arrays
by RMGir (Prior) on Nov 14, 2013 at 13:30 UTC
What do you think of ? It still uses more memory than iteration, since it creates the (potentially large) temporary array to index, but on the flipside it's likely faster than the iteration approach for sane-sized files, I think...

Edit: benchmarks.

Re-Edit: benchmarks for different array sizes, and added "undef @h{@data}", thanks [id://hdb].

These benchmarks are in-memory only - no file access involved, since I wasn't sure how to make sure caching didn't impact the results without creating a whole bunch of temp files and making sure the disk cache was trashed. They're also just for the "load the data into the hash" micro-step, not the whole "compute the difference" operation.

Results:

    ========== ARRAY SIZE: 10
                         Rate  @tmp=@arr  @tmp=undef  loop=1  loop=undef  @h{@data}=1  @h{@data}=undef  undef @h{@data}
    @tmp=@arr         49764/s         --        -33%    -54%        -55%         -64%             -66%             -67%
    @tmp=undef        74467/s        50%          --    -31%        -33%         -46%             -48%             -51%
    loop=1           107884/s       117%         45%      --         -3%         -22%             -25%             -28%
    loop=undef       111002/s       123%         49%      3%          --         -20%             -23%             -26%
    @h{@data}=1      138688/s       179%         86%     29%         25%           --              -4%              -8%
    @h{@data}=undef  144398/s       190%         94%     34%         30%           4%               --              -4%
    undef @h{@data}  150886/s       203%        103%     40%         36%           9%               4%               --

    ========== ARRAY SIZE: 100
                         Rate  @tmp=@arr  @tmp=undef  loop=1  loop=undef  @h{@data}=undef  @h{@data}=1  undef @h{@data}
    @tmp=@arr          5566/s         --        -36%    -57%        -57%             -65%         -66%             -66%
    @tmp=undef         8660/s        56%          --    -33%        -34%             -45%         -46%             -47%
    loop=1            12992/s       133%         50%      --         -0%             -18%         -20%             -20%
    loop=undef        13035/s       134%         51%      0%          --             -17%         -19%             -20%
    @h{@data}=undef   15754/s       183%         82%     21%         21%               --          -3%              -3%
    @h{@data}=1       16162/s       190%         87%     24%         24%               3%           --              -0%
    undef @h{@data}   16214/s       191%         87%     25%         24%               3%           0%               --

    ========== ARRAY SIZE: 1000
                         Rate  @tmp=@arr  @tmp=undef  loop=1  loop=undef  @h{@data}=undef  undef @h{@data}  @h{@data}=1
    @tmp=@arr           534/s         --        -38%    -59%        -59%             -66%             -67%         -67%
    @tmp=undef          860/s        61%          --    -34%        -35%             -45%             -46%         -46%
    loop=1             1309/s       145%         52%      --         -1%             -16%             -18%         -18%
    loop=undef         1316/s       147%         53%      1%          --             -15%             -18%         -18%
    @h{@data}=undef    1555/s       191%         81%     19%         18%               --              -3%          -3%
    undef @h{@data}    1601/s       200%         86%     22%         22%               3%               --           0%
    @h{@data}=1        1601/s       200%         86%     22%         22%               3%               0%           --

    ========== ARRAY SIZE: 10000
                         Rate  @tmp=@arr  @tmp=undef  loop=1  loop=undef  @h{@data}=1  undef @h{@data}  @h{@data}=undef
    @tmp=@arr          51.5/s         --        -33%    -57%        -60%         -64%             -65%             -66%
    @tmp=undef         77.3/s        50%          --    -36%        -40%         -45%             -48%             -49%
    loop=1              121/s       135%         56%      --         -6%         -14%             -19%             -20%
    loop=undef          128/s       149%         66%      6%          --          -9%             -14%             -15%
    @h{@data}=1         141/s       175%         83%     17%         10%           --              -5%              -6%
    undef @h{@data}     149/s       189%         93%     23%         16%           5%               --              -1%
    @h{@data}=undef     151/s       193%         95%     25%         18%           7%               1%               --

    ========== ARRAY SIZE: 100000
                         Rate  @tmp=@arr  @tmp=undef  loop=undef  loop=1  @h{@data}=1  @h{@data}=undef  undef @h{@data}
    @tmp=@arr          3.72/s         --        -35%        -61%    -62%         -67%             -68%             -69%
    @tmp=undef         5.74/s        54%          --        -39%    -41%         -49%             -51%             -51%
    loop=undef         9.42/s       153%         64%          --     -4%         -17%             -20%             -20%
    loop=1             9.80/s       164%         71%          4%      --         -13%             -16%             -17%
    @h{@data}=1        11.3/s       204%         97%         20%     15%           --              -4%              -4%
    @h{@data}=undef    11.7/s       216%        104%         25%     20%           4%               --              -1%
    undef @h{@data}    11.8/s       218%        106%         26%     21%           5%               1%               --

Mike
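The benchmark script itself was attached through the site's code links and is not reproduced above. For reference, here is a sketch of the kind of Benchmark::cmpthese harness that could produce tables like these; the labels match the columns, but the bodies of the test subs are guesses, not RMGir's actual code:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Hypothetical reconstruction of the harness, not the original script.
    for my $size (10, 100, 1_000, 10_000, 100_000) {
        my @data = map { "line_$_" } 1 .. $size;
        print "========== ARRAY SIZE: $size\n";
        cmpthese( -1, {
            '@tmp=@arr'       => sub { my %h; my @tmp = @data; @h{@tmp} = @tmp; },
            '@tmp=undef'      => sub { my %h; my @tmp = @data; @h{@tmp} = ();   },
            'loop=1'          => sub { my %h; $h{$_} = 1     for @data;         },
            'loop=undef'      => sub { my %h; $h{$_} = undef for @data;         },
            '@h{@data}=1'     => sub { my %h; @h{@data} = (1) x @data;          },
            '@h{@data}=undef' => sub { my %h; @h{@data} = ();                   },
            'undef @h{@data}' => sub { my %h; undef @h{@data};                  },
        } );
    }

Each candidate builds a fresh hash from the same in-memory array, which matches the "load the data into the hash" micro-step described above.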
by hdb (Monsignor) on Nov 14, 2013 at 14:01 UTC
Or even

    undef @h{@data};

(the variant RMGir added to the benchmark above).
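One caveat worth adding here (not raised in the thread itself): when the hash values are undef, membership has to be tested with exists rather than with plain truth, for example:

    my %h;
    my @data = qw(foo bar baz);
    undef @h{@data};                    # keys exist, values are undef
    print "found\n" if exists $h{foo};  # prints "found"
    print "found\n" if $h{foo};         # prints nothing: the value is undef

Otherwise a difference computation based on truth would wrongly report every line as missing.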
by RMGir (Prior) on Nov 14, 2013 at 14:07 UTC
Mike
by Laurent_R (Canon) on Nov 14, 2013 at 23:43 UTC
Hmm, your benchmark is very interesting in terms of comparing various ways of storing an array into a hash, and I'll definitely keep the data somewhere for my own benefit, but I am not sure this benchmark is really relevant to the OP, which was about reading a file into memory. If the I/O takes much more time than the in-memory work, then where you have, say, a 20% performance improvement loading an array into a hash, it might only be a 2% improvement once reading the file is involved, and the gain is probably not worth the trouble in this case.

My point in my previous post was about avoiding using too much memory, rather than about CPU usage. I work daily with very large files, and quite often with really huge ones. Most of the time, I do not care too much whether my program processing 100 million records will run in 10 or in 20 minutes, but I do really care whether it will run to completion or blow up for lack of memory.

My work is very often to compare two very large files. Quite often, the data volume will simply not fit in memory. My strategy in such cases is often to first sort the files according to some unique key (for example with the Unix sort utility), and then to compare them line by line (which is not as easy as it might look: reading two files in parallel is tricky when lines may be missing from one file or the other). But once you've got the algorithm right, this is really very fast.

Well, I might have gotten carried away; I just wanted to say that CPU usage is not necessarily the ultimate goal. Sometimes memory usage is far more important, at least when it makes the difference between a program that dies before completion and one that runs smoothly to the end.
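The sort-then-compare strategy described above could look something like the sketch below: both files are first sorted on their unique key, then walked in parallel, always advancing the side with the smaller key, so lines present in only one file are reported without ever holding a whole file in memory. The file names are hypothetical, one key per line is assumed, and both files are assumed to be sorted with the same byte-wise ordering (e.g. LC_ALL=C sort) that Perl's lt/gt comparison uses:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Compare two files already sorted on their unique keys.
    open my $old_fh, '<', 'old_sorted.txt' or die "Cannot open old_sorted.txt: $!";
    open my $new_fh, '<', 'new_sorted.txt' or die "Cannot open new_sorted.txt: $!";

    my $old_line = <$old_fh>;
    my $new_line = <$new_fh>;

    while (defined $old_line and defined $new_line) {
        chomp(my $old_key = $old_line);
        chomp(my $new_key = $new_line);
        if ($old_key lt $new_key) {       # key only in the first file
            print "only in old: $old_key\n";
            $old_line = <$old_fh>;
        }
        elsif ($old_key gt $new_key) {    # key only in the second file
            print "only in new: $new_key\n";
            $new_line = <$new_fh>;
        }
        else {                            # key present in both files
            $old_line = <$old_fh>;
            $new_line = <$new_fh>;
        }
    }

    # Drain whatever remains once one of the files runs out.
    while (defined $old_line) {
        chomp $old_line;
        print "only in old: $old_line\n";
        $old_line = <$old_fh>;
    }
    while (defined $new_line) {
        chomp $new_line;
        print "only in new: $new_line\n";
        $new_line = <$new_fh>;
    }

    close $old_fh;
    close $new_fh;

The memory footprint stays constant (one line per file at a time), which is the point of sorting first.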