in reply to Parallel::ForkManager is time consuming...takes too long

You are wasting your time trying to speed this up using multi-processing when a simple, single-threaded piece of Perl can process your 100,000 lookups far faster than your 30-second target.

Less than 2/10ths of a second in fact:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];

our $N //= 1e5;

# Generate $N random 18-bit strings to look up.
my @strings = map sprintf( '%18.18b', int rand( 2**18 ) ), 1 .. $N;

my %lookup = (
    '101010101010101010' => [ 1350, 9234, 8889 ],
    '010101010101010101' => [ 1345, 2234, 3689 ],
    '111111111000000000' => [ 2256, 3370, 1340 ],
);

my $start = time;

# Look up every string, falling back to [ 0, 0, 0 ] when there is no match.
my @vals;
@vals = @{ $lookup{ $_ } // [ 0, 0, 0 ] } for @strings;

printf "$N lookups took %f seconds\n", time() - $start;

__END__
C:\test>junk
100000 lookups took 0.137553 seconds

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Parallel::ForkManager is time consuming...takes too long
by esolkc (Acolyte) on Aug 15, 2011 at 17:18 UTC
    This example you have provided is indeed much faster than what we had implemented. I'd like to use the same style to solve a similar issue. I have two integer arrays that will be used in a subroutine within if{} expressions. Each if{} expression returns some values, as in the aforementioned example. I assume I cannot use the => operator for the hash in this case.
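
    A minimal sketch of one possible reading of this, assuming the two integers can be joined into a composite hash key; the key format, the lookup_pair name, and the [ 0, 0, 0 ] default below are assumptions, not code from the thread. The point is that a chain of if{} tests over a pair of integers can usually be replaced by a hash keyed on that pair, and the => operator works there just as in the example above, since it is only a fancy comma:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use feature 'say';

        # Hypothetical lookup table: each key joins the pair of integers that
        # one of the old if{} tests compared against; the value is what that
        # if{} block used to return.
        my %lookup = (
            '3,7'   => [ 1350, 9234, 8889 ],
            '12,42' => [ 1345, 2234, 3689 ],
        );

        # Replaces a chain of if( $x == 3 && $y == 7 ) { ... } style tests.
        sub lookup_pair {
            my ( $x, $y ) = @_;
            return @{ $lookup{ "$x,$y" } // [ 0, 0, 0 ] };   # default when nothing matches
        }

        say join ' ', lookup_pair( 3, 7 );   # prints: 1350 9234 8889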

      Post some code, or pseudo code, to explain what you mean, because I cannot make sense of your description.


        Let me rewrite ...