Hi All
The task I have at hand is as follows:
There is a webpage which changes periodically, and when it does, it displays a button which I need to click as fast as possible (basically before anybody else does).
The way I have implemented this so far is to write a script which uses curl to download the page, grep it for the "button" part, and, if it finds nothing, download again until the change is found.
When the change is found I also use curl to press the button, which works OK as far as I'm concerned (although no doubt it could be improved).
The part I am not too pleased with is the one that monitors the webpage:
- Firstly, it uses a lot of bandwidth, comparatively, since it is endlessly downloading the same page over and over again.
- Secondly, it is relatively resource-greedy.
What I would like help with is deciding whether LWP, HTTP::Monitor, HTTP::Tiny, or any similar module might let me improve on this, or whether simply switching to wget would. I'm quite ignorant as to how these tools perform for this kind of job.
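For what it's worth, here is a minimal sketch of how the polling loop might look with HTTP::Tiny (in core since Perl 5.14), using a conditional GET (If-Modified-Since) so an unchanged page costs only a tiny 304 response instead of a full download. This only helps if the server sends a Last-Modified header and honours conditional requests, which is an assumption; the URL, button pattern, and form data below are placeholders, not the real page:

```perl
use strict;
use warnings;
use HTTP::Tiny;

# Placeholders -- substitute your real URL and button marker.
my $url     = 'http://example.com/page';
my $pattern = qr/name="the-button"/;

# Returns true if the HTML contains the button marker.
sub has_button {
    my ($html) = @_;
    return $html =~ $pattern ? 1 : 0;
}

# Poll until the button appears, then "press" it with a POST.
# Call monitor() to start; it is not run automatically here.
sub monitor {
    my $http = HTTP::Tiny->new;
    my $last_modified;

    while (1) {
        my %headers;
        $headers{'if-modified-since'} = $last_modified
            if defined $last_modified;
        my $res = $http->get( $url, { headers => \%headers } );

        if ( $res->{status} == 304 ) {
            # Unchanged: the server sent no body at all.
        }
        elsif ( $res->{success} ) {
            $last_modified = $res->{headers}{'last-modified'};
            if ( has_button( $res->{content} ) ) {
                # Placeholder form data for the button press.
                $http->post_form( $url, { press => 1 } );
                last;
            }
        }
        sleep 1;    # be polite; tune to taste
    }
}
```

The same loop structure works with LWP::UserAgent if you prefer it; HTTP::Tiny just has a smaller footprint, which matters if the concern is resource greed.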
Another thing I was wondering is whether there is a way to save time when the webpage actually does change, for instance by interrupting the download. I know LWP provides some kind of "callback" as the download proceeds, but I don't quite know how I could implement that.
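On that last point: LWP::UserAgent accepts a `:content_cb` callback that receives each chunk of the body as it arrives, and die-ing inside the callback aborts the transfer; LWP records the message in an `X-Died` response header. A sketch along those lines, again with a placeholder URL and pattern:

```perl
use strict;
use warnings;
use LWP::UserAgent;

my $url     = 'http://example.com/page';   # placeholder
my $pattern = qr/name="the-button"/;       # placeholder

my $ua = LWP::UserAgent->new;

# Fetches the page, stopping the download the moment the button
# marker is seen; returns true if it was found.
sub check_page {
    my $buf = '';
    my $res = $ua->get(
        $url,
        ':read_size_hint' => 8192,    # ask for smallish chunks
        ':content_cb'     => sub {
            my ($chunk) = @_;
            # Accumulate so a marker split across two chunks
            # still matches.
            $buf .= $chunk;
            die "FOUND\n" if $buf =~ $pattern;
        },
    );
    # A die inside the callback surfaces as an X-Died header.
    my $died = $res->header('X-Died') || '';
    return $died =~ /^FOUND/ ? 1 : 0;
}

# Loop on check_page(), then press the button with your existing
# curl command (or a $ua->post) once it returns true.
```

The saving is real only when the button marker sits early in a large page; for a small page the whole body arrives in one chunk anyway.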
I'm sorry if this question is out of scope, or too wide, or not well asked. Any help will be appreciated.
Thank you!
Mark.
In reply to Fast efficient webpage section monitoring by Marcool