Welcome. This question is a great one, inasmuch as the modern alternatives are much nicer and easier to work with, so I'm really glad you asked. Firstly, many universities publish webpages for their courses, so there's a good chance the actual example code shown here (I gave up watching a little of the way in; honestly, no disrespect to anyone intended) may be available online in a sane, downloadable format.
When I talk about nicer/easier ways to work, I really mean tools that make the task at hand easier to maintain, more fun to work with, and, as far as I'm concerned, easier to learn and teach. I'd suggest you look at Mojo::DOM, which is simply fantastic for working with web-based data, along with Mojo::UserAgent. The two combine easily, as seen in some examples below. Mojolicious has fantastic documentation and examples to get you started with modern web development in Perl. Here are some answers to other posts here that I've implemented using the above tools. Some are suboptimal, and I may get around to updating them one day.
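To give a flavour of how pleasant Mojo::DOM is, here is a minimal sketch that extracts link targets with CSS selectors. The HTML is inline for illustration; with Mojo::UserAgent you'd get the same Mojo::DOM object from a live page (assuming Mojolicious is installed).

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Mojo::DOM;

# With Mojo::UserAgent the same calls apply to a fetched page:
#   my $dom = Mojo::UserAgent->new->get($url)->result->dom;
my $dom = Mojo::DOM->new(<<'HTML');
<ul>
  <li><a href="/first">First</a></li>
  <li><a href="/second">Second</a></li>
</ul>
HTML

# find() returns a Mojo::Collection, so map/each chain naturally
my @hrefs = $dom->find('a[href]')->map(attr => 'href')->each;
print "$_\n" for @hrefs;   # /first and /second
```

The chainable collection methods are a big part of what makes this style so much nicer than hand-rolled regex scraping.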
Modern Perl has some fantastic alternatives to older tools and methods that are really worth exploring. Please let us know if you have any follow-up questions.
Welcome to Perl. I hope you enjoy your time with this language. Coming from PHP, you will likely see a lot that looks familiar.
As to your question regarding a web crawler, may I suggest taking a look at Web Client Programming with Perl (available through the O'Reilly Open Books Project) and the LWP cookbook. They may seem a bit dated (1st edition, March 1997), but they provide a good background, and the book includes an example of a recursive client in Chapter 6. (You may also want to look at Merlyn's columns.)
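The core of that recursive-client idea is: fetch a page, extract its links, resolve them against the page's base URL, and enqueue the results. Here is a hedged sketch of the extraction step using HTML::LinkExtor and URI; the HTML and base URL are placeholders, and in a real crawler the markup would come from an LWP::UserAgent fetch.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use HTML::LinkExtor;
use URI;

# Placeholder inputs; a real crawler would fetch the markup, e.g.
#   my $html = LWP::UserAgent->new->get($base)->decoded_content;
my $base = 'http://example.org/docs/index.html';
my $html = '<a href="intro.html">Intro</a> <a href="/faq">FAQ</a>';

# Collect absolute URLs from every <a href="..."> in the document
my @found;
my $parser = HTML::LinkExtor->new(sub {
    my ($tag, %attrs) = @_;
    return unless $tag eq 'a' && $attrs{href};
    push @found, URI->new_abs($attrs{href}, $base)->as_string;
});
$parser->parse($html);
$parser->eof;

print "$_\n" for @found;
```

A recursive crawler then pops each URL off a work queue, fetches it, and runs the same extraction, keeping a `%seen` hash to avoid revisiting pages; Chapter 6 of the book walks through exactly that loop.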
Good luck with the project. Hope that helps.