in reply to Re^2: Perl threads to open 200 http connections
in thread Perl threads to open 200 http connections

The first thing I notice is that there are several problems with the code you've posted.

You have use strict; but $thread is never declared?

Also, you are attempting to start 200 threads, but only waiting for one to complete.
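
The simplest way to wait for them all is to keep every thread handle and join each one. A minimal sketch (the worker sub here is just a stand-in, not your download code):

use strict;
use warnings;
use threads;

## Stand-in worker; the real download would go here.
sub worker { my $id = shift; return $id }

## Keep every handle ...
my @threads = map threads->create( \&worker, $_ ), 1 .. 200;

## ... and wait for *all* of them, not just the first.
$_->join for @threads;

With 200 threads, though, joining return values we have no interest in is wasteful, so below I detach them and count them down instead.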

Try this:

#! perl -slw
use strict;
use threads ( stack_size => 4096 );
use threads::shared;
use LWP::Simple;
use Time::HiRes qw[ time sleep ];

our $T ||= 200;

my $url = ### your url (of the actual file!) here ###;

my $running :shared = 0;
my $start = time;

for( 1 .. $T ) {
    async( sub{
        { lock $running; ++$running };
        sleep 0.001 while $running < $T;
        my $id = shift;
        getstore( $url, qq[c:/test/dl.t.$id] );
        lock $running; --$running;
    }, $_ )->detach;
}

sleep 1 while $running;

printf "Took %.3f seconds\n", time() - $start;
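
A note on running it: the -s in the #! perl -slw line enables Perl's built-in switch parsing, so a switch of the form -T=nnn on the command line sets $T before the our $T ||= 200; default applies. For example (the script name here is hypothetical):

perl dl200.pl -T=50    ## run with 50 threads instead of 200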

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP an inspiration; A true Folk's Guy

Re^4: Perl threads to open 200 http connections
by robrt (Novice) on Aug 04, 2010 at 13:07 UTC

    Thanks! I executed the program that you gave, but it failed, throwing the error below:

    Free to wrong pool 234eb0 not 11475c80 at C:/Perl/lib/Errno.pm line 16.

    Also, in my original program, I was trying to download 200 different files, changing the URL each time, but the program that you gave looks like it downloads a single file 200 times... is that right?

    If you don't mind, could you please explain your program (a short description for each line)? The whole concept of async is confusing to me, and I could not follow the way you have coded the program. Thanks again!

      Do you get the error with a newer Perl?

      That message indicates a bug in Perl or in an XS module.

      but it failed, throwing the error below:

      Hm. Runs fine under 5.8.9.

      Also, in my original program, I was trying to download 200 different files, changing the URL each time, but the program that you gave looks like it downloads a single file 200 times... is that right?

      Sorry. I don't have a handy source of 200 different files at my disposal, but see the sketch after the commented listing below for one way to vary the URL per thread.

      #! perl -slw
      use strict;
      use threads ( stack_size => 4096 );
      use threads::shared;
      use LWP::Simple;
      use Time::HiRes qw[ time sleep ];

      our $T ||= 200;  ## This can be changed by a command line argument -T=nnn

      my $url = ## your url here ##;

      ## This shared variable counts
      ## the number of running threads
      my $running :shared = 0;

      ## This records the start time
      my $start = time;

      ## For 1 to 200
      for( 1 .. $T ) {
          ## start a new thread
          async(
              ## running this sub
              sub{
                  ## Increment the running threads count
                  { lock $running; ++$running };

                  ## Make all threads wait until all threads are running
                  ## so that the download requests all hit the server at the same time
                  sleep 0.001 while $running < $T;

                  ## The number (1..200) passed in as $_ below.
                  my $id = shift;

                  ## get $url and store it in a file with $id as part of the name
                  getstore( $url, qq[c:/test/dl.t.$id] );

                  ## Now this thread is finished, decrement the count
                  lock $running; --$running;
              },
              $_  ## $_ (1..$T) becomes $id inside.
          ## Detach means that the threads go away as soon as they are done,
          ## rather than hanging around consuming resources waiting to return
          ## a return value to join that we have no interest in.
          )->detach;
      }

      ## Now the main thread just sleeps till all the d/l threads have finished.
      sleep 1 while $running;

      ## And tells you how long the whole thing took
      printf "Took %.3f seconds\n", time() - $start;
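
      If you do have a list of 200 distinct URLs, only the loop changes: pass each thread its own URL as a second argument. A sketch only, assuming the same setup as above; @urls and its contents are made up:

      ## Assumed stand-in: in reality, fill @urls with your 200 addresses.
      my @urls = map "http://example.com/file$_.dat", 1 .. $T;

      for my $i ( 0 .. $#urls ) {
          async( sub{
              { lock $running; ++$running };
              sleep 0.001 while $running < $T;
              ## each thread receives its own id and url
              my( $id, $u ) = @_;
              getstore( $u, qq[c:/test/dl.t.$id] );
              lock $running; --$running;
          }, $i, $urls[$i] )->detach;
      }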

      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Thanks a lot for your help and patience. In the end I used the Apache benchmarking tool, started 1000 connections, and successfully tested the network. Thanks all!