in reply to Apparently strange beahavior that blocks at Filehandle reading

I checked the rest of the program for some strange interaction with that file and filehandle name

That's why you don't use bareword filehandles.
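A quick sketch of the failure mode (my example, not code from the original post; it assumes test1.txt and test2.txt exist, as created below):

open(DB, '<', 'test1.txt') or die "Can't open test1.txt: $!";
sub elsewhere { open(DB, '<', 'test2.txt') or die "Can't open test2.txt: $!" }
elsewhere();          # any code reusing the name clobbers the package-global DB
print scalar <DB>;    # prints a line from test2.txt, not test1.txt

A lexical filehandle (open my $fh, ...) is scoped, so no other part of the program can trample it.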

use strict;
use warnings;
use 5.010;

# Create some text files:
my @fnames = ('test1.txt', 'test2.txt');

for my $fname (@fnames) {
    open my $OUT, '>', $fname
        or die "Couldn't open $fname: $!";

    for my $num (1 .. 10) {
        say $OUT "$fname -- $num";
    }
}

#------------------
sub print_error {
    say 'error';
}
#------------------

# Read the text files:
FOR: for my $fname ('test1.txt', 'test2.txt') {
    open my $INPUT, '<', $fname
        or print_error();

    while (my $line = <$INPUT>) {
        print $line;
    }

    print "\n";
}

--output:--
test1.txt -- 1
test1.txt -- 2
test1.txt -- 3
test1.txt -- 4
test1.txt -- 5
test1.txt -- 6
test1.txt -- 7
test1.txt -- 8
test1.txt -- 9
test1.txt -- 10

test2.txt -- 1
test2.txt -- 2
test2.txt -- 3
test2.txt -- 4
test2.txt -- 5
test2.txt -- 6
test2.txt -- 7
test2.txt -- 8
test2.txt -- 9
test2.txt -- 10

When the input line operator, <>, reads end-of-file, it returns a false value that causes the while loop to end.
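To spell that out (a sketch of mine, not from the thread): when the while condition is nothing but a readline assignment, Perl implicitly wraps it in defined(), so the loop stops only at end-of-file and not on a line that happens to be false (e.g. a final "0" with no trailing newline):

while ( my $line = <$INPUT> ) {
    # compiled as: while ( defined( my $line = <$INPUT> ) )
    print $line;
}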

How about using the 3-arg form of open()?

open(DB, "<$F")    # 2-arg open with a global bareword filehandle, no error check

# compare to:
open my $INFILE, '<', $fname    # 3-arg open: the mode can never be confused with the filename
    or die "Couldn't open $fname: $!";

How about printing out the filename in the loop?
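For instance (a debugging sketch of my own, not code from the post), warn writes to STDERR, which is unbuffered, so the message shows up even if STDOUT is being buffered:

for my $fname (@fnames) {
    warn "About to open: $fname\n";
    open my $INPUT, '<', $fname
        or die "Couldn't open $fname: $!";
    print while <$INPUT>;
}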

Re^2: Apparently strange beahavior that blocks at Filehandle reading
by NewMonkMark (Initiate) on Jun 17, 2011 at 09:45 UTC

    Sorry, I misled all of you and myself.

    I blamed the filehandle handling for that behaviour, but it has nothing to do with the problem, at least not that filehandle. The problem is with STDOUT: I tested the code through the shell, waiting for some print on STDOUT during each cycle, but because of a long operation inside the loop, the unflushed output gave me NO feedback. Setting $| = 1; gives me each cycle's output in real time...
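    A minimal sketch of that buffering effect (my example, not code from the post); $| = 1; turns on autoflush for the currently selected handle, normally STDOUT:

    use strict;
    use warnings;

    $| = 1;    # autoflush STDOUT: each print is flushed immediately

    # equivalent method form:
    # use IO::Handle;
    # STDOUT->autoflush(1);

    for my $i (1 .. 5) {
        print "cycle $i\n";    # now appears in real time
        sleep 2;               # stand-in for the long operation inside the loop
    }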

    However, thanks to all readers.

    On the other hand, I now have a problem with the unexpected slowness of the following code. (Don't bother replying; I only post it here for completeness, and more properly I'll probably open a new, specific post for it.)

    $DB_STATS{COMB} is a hash of about 180,000 other hashes, each with 2 keys, {Rit} and {Usc}. (The COMB keys {$C} are about 20-25 bytes each.)

    foreach my $C ( keys %{ $DB_STATS{COMB} } ) {
        $DB_STATS{COMB}{$C}{Rit}++;
    }
      $DB_STATS{COMB} is a hash of about 180,000 other hashes, each with 2 keys, {Rit} and {Usc}. (The COMB keys {$C} are about 20-25 bytes each.)

      Could you post the first 10 lines produced by:

      use Data::Dumper;
      ...
      print Dumper( \%DB_STATS );    # Dumper (capital D) is the exported function

      ?


        OK, I'll show you the structure:

        The attention is on the 'COMB' key, which references the hash with 180,000 keys of about 20-25 characters. The program must increment the "lower level" {Rit} key for each "while" cycle that reads, line by line, from the previously discussed files (9000+ lines).

        I know it's a huge amount of work, but does anyone have an idea of how to make it faster? At this time it processes the 180,000 values 50 times over (~9 million increments) in about 12 secs (the whole job could take about 2160 secs).
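        One commonly suggested micro-optimisation (a sketch of mine, not from the thread): iterate over values instead of keys, which hands you the inner hash references directly and skips one full hash lookup per key on every pass:

        for my $href ( values %{ $DB_STATS{COMB} } ) {
            $href->{Rit}++;    # same increment, without re-looking up $DB_STATS{COMB}{$C}
        }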

        Thank you again to all of you!

        %DB_STATS = (
            'COMB' => {
                '1,2,4,6,9,11,13,15,16,17' => { 'Rit' => 0, 'Usc' => 0 },
                '1,2,4,6,9,11,13,15,16,18' => { 'Rit' => 0, 'Usc' => 0 },
                '1,2,4,6,9,11,13,15,16,19' => { 'Rit' => 0, 'Usc' => 0 },
                'And 180_000 more Keys...' => { 'Rit' => 0, 'Usc' => 0 },
            },
            'Other keys NOT of our interest',
            ...,
            ...,
        );