You can speed up your task by optimizing parsing, optimizing URL fetching and, finally, by fetching several URLs in parallel.
If you profile your code, you will see that most of the time is spent in the blocking get (more precisely, in the slow system call read).
So the first (and, I think, last :) ) point to optimize in your script is to fetch the URLs in parallel.
Let's look at several ways to do this and note the advantages and disadvantages of each:
- Using a fork solution. You fork the process and download each URL in its own child process. An excellent example was shown before: Parallel::ForkManager.
Advantages: straightforward, lazy solution; it will work fine for you right now.
Disadvantages: it takes additional system resources, and you will have to implement some IPC between the processes (not so trivial a task!) if (when) you want to combine the results together.
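For instance, a minimal sketch with Parallel::ForkManager (the fetch_all_forked helper, the process limit, and the coderef-based fetcher are my own illustration, not code from the thread); the run_on_finish callback is exactly the IPC piece mentioned above, handled by the module for you:

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# Fetch every URL in its own child process; results travel back to
# the parent through Parallel::ForkManager's built-in IPC channel.
sub fetch_all_forked {
    my ($urls, $max_procs, $fetch) = @_;   # $fetch: coderef doing one blocking GET
    my %results;
    my $pm = Parallel::ForkManager->new($max_procs);

    # Runs in the parent each time a child exits; $data is the reference
    # the child passed to finish(), serialized across the process boundary.
    $pm->run_on_finish(sub {
        my ($pid, $exit, $url, $signal, $core, $data) = @_;
        $results{$url} = $$data if defined $data;
    });

    for my $url (@$urls) {
        $pm->start($url) and next;   # parent: spawn a child, move to next URL
        my $body = $fetch->($url);   # child: do the slow, blocking fetch
        $pm->finish(0, \$body);      # child: exit, shipping the body back
    }
    $pm->wait_all_children;
    return \%results;
}
```

With LWP the fetcher could be something like sub { LWP::UserAgent->new->get($_[0])->decoded_content }.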
- Using threads. This is how the task is usually done in C: we create a separate thread for every URL, so each LWP get runs in its own thread simultaneously with the others.
Advantages: IPC is easy to implement; this is the standard solution.
Disadvantages: Perl iThreads require a lot of system resources and are not very fast :(
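A corresponding sketch with core threads (it assumes a perl built with thread support; the helper name and coderef fetcher are again mine). Note how little IPC code is needed: join() simply hands each thread's return value back to the main thread:

```perl
use strict;
use warnings;
use threads;

# One thread per URL; join() collects each thread's return value,
# so combining results is trivial compared with the fork approach.
sub fetch_all_threaded {
    my ($urls, $fetch) = @_;

    # Start all downloads at once...
    my @workers = map { [ $_, threads->create($fetch, $_) ] } @$urls;

    # ...then wait for each thread and keep its result.
    my %results;
    for my $w (@workers) {
        my ($url, $thr) = @$w;
        $results{$url} = $thr->join;
    }
    return \%results;
}
```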
- Using non-blocking sockets. Instead of blocking when no data is ready on the socket, read returns immediately with an EAGAIN/EWOULDBLOCK error. So you can run an event loop over the set of open sockets: can we read from this socket? Read and parse; otherwise try the next socket. Check out LWP::Parallel::UserAgent.
Advantages: fast; saves resources.
Disadvantages: a bit more complicated to program.
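The event loop itself can be sketched with core IO::Select over a set of already-connected, non-blocking sockets (the read_all_nonblocking helper is my own name; a real HTTP client would add request writing and response parsing on top, which is what LWP::Parallel::UserAgent does for you):

```perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;
use Errno qw(EAGAIN EWOULDBLOCK);

# Drain several sockets concurrently: ask select() which ones have
# data, sysread what is there, and move on instead of blocking.
sub read_all_nonblocking {
    my @socks = @_;
    $_->blocking(0) for @socks;      # reads now fail with EAGAIN instead of blocking
    my $sel = IO::Select->new(@socks);
    my %buf;

    while ($sel->count) {
        for my $s ($sel->can_read(1)) {
            my $n = sysread($s, my $chunk, 4096);
            if (!defined $n) {
                # No data right now: fine, try another socket.
                next if $! == EAGAIN || $! == EWOULDBLOCK;
                $sel->remove($s);    # real error: give up on this socket
            }
            elsif ($n == 0) {        # EOF: the peer finished sending
                $sel->remove($s);
                close $s;
            }
            else {
                $buf{ fileno $s } .= $chunk;
            }
        }
    }
    return \%buf;
}
```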