Ntav has asked for the wisdom of the Perl Monks concerning the following question:
```perl
use LWP::Simple;

# each page is actually processed in a subroutine but you get the idea
$page1 = "http://www.first.com";
$page1 = get($page1);

# rest of the pages in same format here
$pageN = "http://www.last.com";
$pageN = get($pageN);
```

Now this has (at least) two problems which I need to solve:

1. Each of the pages is retrieved in turn, whereas given the speed of the server the script is run on I want to get them all at once. Q1: how do I implement multithreading here?
2. If a page fails to respond I don't want the script to wait any more than N seconds before moving on to the next. Q2: how do I time a (sub)process and kill it after N seconds?

Thanks for any help,
Ntav
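Neither concurrency nor a timeout is built into LWP::Simple's `get()`, but both are available elsewhere: LWP::UserAgent has a per-request `timeout`, and a forking module can run several fetches at once. A minimal sketch, assuming Parallel::ForkManager is installed; the `@urls` list, the 5-process limit and the 10-second timeout are placeholders, not anything from the original post:

```perl
use strict;
use warnings;
use LWP::UserAgent;
use Parallel::ForkManager;

my @urls = ('http://www.first.com', 'http://www.last.com');  # the pages to fetch

my $ua = LWP::UserAgent->new;
$ua->timeout(10);                # Q2: give up on an unresponsive page after N (here 10) seconds

my $pm = Parallel::ForkManager->new(5);   # Q1: up to 5 fetches running at once

for my $url (@urls) {
    $pm->start and next;         # parent keeps looping; the child runs the block below

    my $response = $ua->get($url);
    if ($response->is_success) {
        # process $response->content here, or save it somewhere the parent can read
        print "$url: fetched ", length($response->content), " bytes\n";
    }
    else {
        warn "$url failed: ", $response->status_line, "\n";
    }

    $pm->finish;                 # child exits
}

$pm->wait_all_children;          # parent blocks until every fetch has finished
```

Two caveats with this sketch: because each page is fetched in a separate process, its content does not land back in the parent's variables automatically, so each child has to write its result to a file (or hand a data structure to `finish` in newer Parallel::ForkManager versions). And `$ua->timeout` limits how long LWP waits for activity on the connection rather than total wall-clock time; a hard N-second cap can instead be imposed by wrapping the fetch in `eval { local $SIG{ALRM} = sub { die "timeout\n" }; alarm $N; ...; alarm 0; }`.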
Replies are listed 'Best First'.
Re: retrieving multiple web documents
by wog (Curate) on Aug 30, 2001 at 06:09 UTC
by tomhukins (Curate) on Aug 30, 2001 at 15:46 UTC
Re: retrieving multiple web documents
by Zaxo (Archbishop) on Aug 30, 2001 at 06:18 UTC