luxs has asked for the wisdom of the Perl Monks concerning the following question:

Hello Monks,

I've got a very simple script that performs plenty of slow operations.

use LWP::Simple;

my @h = ( 'http://addr1', 'http://addr2', 'http://addr3', 'http://addr4' );    # can be many more

my @a;
foreach (@h) {
    push @a, get($_);
}

Q1: How can I change this script so that all the 'get' requests run simultaneously?

Q2: How can I estimate, from within the script, how many parallel requests I can make? I want the script to be efficient on computers with very different power.

Thanks for your comments.

Replies are listed 'Best First'.
Re: fork / threads script
by atcroft (Abbot) on Apr 05, 2015 at 04:26 UTC

    May I suggest you take a look at Parallel::ForkManager? I have used it a number of times so I would not have to worry about keeping track of the number of processes that are running: just set a maximum, and when one process finishes, the module handles spawning off the next one. (Bonus: one of the examples in the documentation is a file downloader.)
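    A minimal sketch of that pattern, using the URLs from the original post (the error handling and the choice of four workers are just illustrative):

    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use Parallel::ForkManager;

    my @h = ( 'http://addr1', 'http://addr2', 'http://addr3', 'http://addr4' );

    my $pm = Parallel::ForkManager->new(4);    # at most 4 children at a time

    foreach my $url (@h) {
        $pm->start and next;           # parent: spawn a child, move on to the next URL
        my $content = get($url);       # child: fetch one URL
        warn "failed: $url\n" unless defined $content;
        $pm->finish;                   # child exits; the parent may now spawn another
    }
    $pm->wait_all_children;            # parent: wait for the stragglers

    Note that each fetch happens in a child process, so if you need the content back in the parent, look at the module's run_on_finish mechanism for passing data back.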

    Hope that helps.

Re: fork / threads script
by davido (Cardinal) on Apr 05, 2015 at 05:22 UTC

    The Mojolicious project has one of the most actively developed user agents with support for event-loop-style requests. This makes it trivial to fire off a bunch of requests in parallel and then just wait for them all to come in. They are all started immediately, and the IO loop then executes a callback that collects the responses. Here's an example using Mojo::UserAgent and Mojo::IOLoop, which are both part of Mojolicious.

    use Mojo::UserAgent;
    use Mojo::IOLoop;

    my @url = qw{ http://example.com http://perlmonks.org };

    my $ua = Mojo::UserAgent->new;

    Mojo::IOLoop->delay(
        sub {
            my $delay = shift;
            $ua->get( $_ => $delay->begin ) foreach @url;
        },
        sub {
            my ( $delay, @tx ) = @_;
            print "'", $_->req->url, "' => '", $_->res->text, "'\n" foreach @tx;
        },
    )->wait;

    The output will be something like this:

    'http://perlmonks.org' => '<!DOCTYPE HTML PUBLIC .......'
    'http://example.com' => '<!doctype html> .......'

    So in the example above, the first sub fires off all the get requests in rapid succession. The second sub collects the responses in whatever order they arrive. Querying the response object is an easy way to keep track of which is which: notice how in my example output I hit example.com first, but perlmonks.org is the first to return. The order is simply whatever comes in first, but it's not hard to sort out which response belongs to which request. It's impressive to watch when you place dozens, if not hundreds, of URLs in the list and see them all come in.

    This is but one of several ways to implement parallel web scraping using Mojolicious. Others are outlined in its documentation.

    I gave another (probably easier to follow) example here: Re: use LWP::Simple slows script down., and here: Re: Most memory efficient way to parallel http request. And there's a screencast from tempire that demonstrates a non-blocking technique here: http://mojocasts.com/e5.

    Update: Here's another example based on the snippet from Re: use LWP::Simple slows script down.:

    use Mojo::UserAgent;
    use Mojo::IOLoop;

    my @url = qw( http://perlmonks.org http://example.com );

    my $ua = Mojo::UserAgent->new;

    $ua->get( $_ => sub {
        my ( $ua, $tx ) = @_;
        print "'", $tx->req->url, "' => '", $tx->res->text, "'\n";
    } ) foreach @url;

    Mojo::IOLoop->start unless Mojo::IOLoop->is_running;

    Though the latter seems simpler, the former stops its event loop as soon as the last request comes in, instead of waiting a few moments to realize that it is done listening.


    Dave

Re: fork / threads script
by choroba (Cardinal) on Apr 05, 2015 at 06:25 UTC
    That's why LWP::Parallel exists.
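    A minimal sketch along the lines of its synopsis (untested; assuming the URLs from the original post):

    use LWP::Parallel::UserAgent;
    use HTTP::Request;

    my @h = ( 'http://addr1', 'http://addr2', 'http://addr3', 'http://addr4' );

    my $pua = LWP::Parallel::UserAgent->new;
    $pua->register( HTTP::Request->new( GET => $_ ) ) for @h;    # queue the requests

    my $entries = $pua->wait;    # blocks until all requests have completed
    foreach my $entry ( values %$entries ) {
        my $res = $entry->response;
        print $res->request->url, " => ", $res->code, "\n";
    }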
    لսႽ† ᥲᥒ⚪⟊Ⴙᘓᖇ Ꮅᘓᖇ⎱ Ⴙᥲ𝇋ƙᘓᖇ
Re: fork / threads script
by BrowserUk (Patriarch) on Apr 05, 2015 at 09:20 UTC

    If the list of urls comes from a file somewhere, here's a script I had kicking around that fetches them concurrently and writes them to files:

    C:\test>type getUrlList.pl
    #! perl -slw
    use strict;
    use threads ( stack_size => 4096 );
    use Thread::Queue;
    use LWP::Simple;

    my $Q = new Thread::Queue;

    sub worker {
        while ( my $url = $Q->dequeue ) {
            chomp $url;
            ( my $file = $url ) =~ tr[/:?.^"][_];    # map awkward chars to '_' for the filename
            $url = 'http://' . $url unless $url =~ m[://];
            my $status = getstore $url, $file;
            $status == 200 or warn "$status : $url ($file)\n";
        }
    }

    our $THREADS //= 4;

    $Q->enqueue( <>, (undef) x $THREADS );    # queue the urls, then one undef per worker to end them

    $_->join for map threads->create( \&worker ), 1 .. $THREADS;

    ## use as: thisScript -THREADS=8 < urls.list

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked