in reply to Re: Apparently strange behaviour that blocks at Filehandle reading
in thread Apparently strange behaviour that blocks at Filehandle reading

Sorry, I misguided all of you and myself.

I blamed the filehandle handling for that behaviour, but it has nothing to do with the problem, at least not with that filehandle. The problem is with STDOUT: I tested the code from the shell, expecting some print output on STDOUT during each cycle, but because of a long operation inside the loop the unflushed output gave me no feedback. Setting $| = 1; gives me each cycle's output in real time.
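For anyone who hits the same thing, here is a minimal, self-contained sketch of the fix (the DATA records and the sleep are only stand-ins for my real input and the long per-cycle operation):

use strict;
use warnings;

$| = 1;    # enable autoflush on the currently selected handle, i.e. STDOUT

while ( my $line = <DATA> ) {
    print "starting cycle for: $line";    # shows up immediately thanks to $| = 1
    sleep 2;                              # stand-in for the long operation inside the loop
}

__DATA__
record 1
record 2
record 3

When STDOUT is not attached to a terminal it is block-buffered, and even on a terminal a print without a trailing newline waits in the buffer, so without $| = 1; the feedback may only appear once the buffer fills or the program exits.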

However, thanks to all readers.

On the other hand, I now have a problem with the unexpected slowness of the following code (no need to reply; I only post it here for completeness, and I will probably open a new, specific post for it):

$DB_STATS{COMB} is a hash of about 180,000 other hashes, each one with two keys, {Rit} and {Usc}. (The COMB keys {$C} are about 20-25 bytes each.)

foreach my $C ( keys %{ $DB_STATS{COMB} } ) { $DB_STATS{COMB}{$C}{Rit}++; }

Re^3: Apparently strange behaviour that blocks at Filehandle reading
by BrowserUk (Patriarch) on Jun 17, 2011 at 09:52 UTC
    $DB_STATS{COMB} is a hash of about 180,000 other hashes, each one with two keys, {Rit} and {Usc}. (The COMB keys {$C} are about 20-25 bytes each.)

    Could you post the first 10 lines produced by:

    use Data::Dumper; ... print Dumper( \%DB_STATS );

    ?


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      OK, I'll show you the structure:

      The attention is on the 'COMB' key, which references the hash with 180,000 keys of about 20-25 characters each. The program must increment the "lower level" key {Rit} of every entry for each "while" cycle that reads line by line from the previously discussed files (9000+ lines).

      I know it is a huge amount of work, but does anyone have an idea of how to make it faster? At the moment it processes the 180,000 values 50 times (~9 million increments) in about 12 seconds, so the whole job could take about 2160 seconds.
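      For context, the overall shape is roughly this (just a sketch: $fh stands for one of the files discussed above, and everything around the inner loop is omitted):

      while ( my $line = <$fh> ) {      # 9000+ lines in total
          # one full pass over all ~180_000 COMB keys per input line
          foreach my $C ( keys %{ $DB_STATS{COMB} } ) {
              $DB_STATS{COMB}{$C}{Rit}++;
          }
      }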

      Thank you again to all of you!

      %DB_STATS = (
          'COMB' => {
              '1,2,4,6,9,11,13,15,16,17' => { 'Rit' => 0, 'Usc' => 0 },
              '1,2,4,6,9,11,13,15,16,18' => { 'Rit' => 0, 'Usc' => 0 },
              '1,2,4,6,9,11,13,15,16,19' => { 'Rit' => 0, 'Usc' => 0 },
              'And 180_000 more Keys...' => { 'Rit' => 0, 'Usc' => 0 },
          },
          'Other keys NOT of our interest', ..., ...,
      );

        Instead of dereferencing the entire multilevel structure each time, if you create an array of references to the scalars you are incrementing, then you can iterate over that array and increment the values in the nested hash in 1/6th the time:

        #! perl -slw
        use strict;
        use Time::HiRes qw[ time ];

        my %DB_STATS;

        # build 1.8 million { Rit, Usc } sub-hashes like the structure above
        $DB_STATS{ COMB }{ $_ } = { Rit=>0, Usc=>0 } for 1 .. 18e5;

        # method 1: full multilevel dereference for every increment
        my $start = time;
        ++$DB_STATS{ COMB }{ $_ }{Rit} for 1 .. 18e5;
        printf "Full deref took %.3f seconds\n", time() - $start;

        # method 2: build an array of references to the Rit scalars once ...
        my @Rits;
        $Rits[ $_ ] = \$DB_STATS{ COMB }{ $_ }{Rit} for 1 .. 18e5;

        # ... then increment through the references
        $start = time;
        ++$$_ for @Rits;
        printf "AoR via alias took %.3f seconds\n", time() - $start;

        print $DB_STATS{COMB}{1}{Rit};

        __END__
        C:\test>910049
        Full deref took 1.511 seconds
        AoR via alias took 0.247 seconds
        2

        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.