Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hello Monks and Monkettes.

I am attempting to download the following page:

https://webapp4.asu.edu/catalog/course?s=MAT&n=243&c=DTPHX&t=2144&f=INTRT&r=44843

so I can run a perl program (via a crontab entry) and monitor the current number of seats available for a specific instructor. I have tried using LWP::Simple's 'get' function, but I am receiving a 302 error. Any ideas would be most appreciated!

Thanks for the help,

Christine

Replies are listed 'Best First'.
Re: Downloading a web page over HTTPS?
by runrig (Abbot) on Nov 26, 2014 at 01:23 UTC
Re: Downloading a web page over HTTPS?
by RonW (Parson) on Nov 26, 2014 at 01:23 UTC

    302 is a redirection response. The response will also contain a URL that will need to be accessed. It looks like LWP::Simple doesn't handle this automatically. You probably need to use one of the other LWP modules.
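
    A minimal way to see what RonW describes, assuming LWP::UserAgent and its HTTPS support (LWP::Protocol::https) are installed: switch off automatic redirect-following and print the `Location` header carried by the 302. The URL is the one from the question.

    ```perl
    #!/usr/bin/env perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    # Turn off automatic redirect-following so the 302 itself is visible.
    my $ua = LWP::UserAgent->new( max_redirect => 0 );

    my $res = $ua->get(
        'https://webapp4.asu.edu/catalog/course?s=MAT&n=243&c=DTPHX&t=2144&f=INTRT&r=44843'
    );

    printf "Status: %s\n", $res->status_line;
    if ( $res->is_redirect ) {
        # The redirection target lives in the Location header.
        printf "Redirects to: %s\n", scalar $res->header('Location');
    }
    ```

    What gets printed obviously depends on what the server sends back on the day you run it.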

Re: Downloading a web page over HTTPS?
by ikegami (Patriarch) on Nov 26, 2014 at 05:18 UTC

    LWP::Simple uses LWP::UserAgent. LWP::UserAgent normally follows redirects. It won't follow redirects received from POST responses by default, however. Consult the docs for LWP::UserAgent to add POST to the list of methods for which redirects are allowed. (You won't be able to use LWP::Simple if this is the problem.)
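
    For reference, the setting ikegami is pointing at is LWP::UserAgent's `requests_redirectable` list; a sketch, assuming plain LWP::UserAgent rather than LWP::Simple:

    ```perl
    #!/usr/bin/env perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new( max_redirect => 7 );

    # Out of the box only GET and HEAD redirects are followed;
    # add POST so a redirect answered to a POST is chased as well.
    push @{ $ua->requests_redirectable }, 'POST';

    my $res = $ua->get(
        'https://webapp4.asu.edu/catalog/course?s=MAT&n=243&c=DTPHX&t=2144&f=INTRT&r=44843'
    );
    print $res->is_success ? "OK\n" : "Failed: " . $res->status_line . "\n";
    ```

    Whether the catalog app really answers with a POST-style redirect is an assumption here; the snippet only shows where the knob lives.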

Re: Downloading a web page over HTTPS?
by Your Mother (Archbishop) on Nov 26, 2014 at 09:47 UTC

    There is a little bit of JS magic going on when you first visit and get cookied. The actual request to the resource requires no JS but getting through https://webapp4.asu.edu/catalog/Home.ext to obtain a valid cookie seems to. I have the basics worked out with WWW::Mechanize::Firefox but tonight is the first time I ever got it to install so I have little experience with it. I’ll update tomorrow (might be late in the day after work) with code that at least does something on which you can build.
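
    If the cookie can be picked up without executing the JS — which is exactly what is in doubt here — the plain-LWP version would look roughly like this; an untested sketch assuming LWP::UserAgent with an HTTP::Cookies jar:

    ```perl
    #!/usr/bin/env perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Cookies;

    my $ua = LWP::UserAgent->new(
        cookie_jar => HTTP::Cookies->new,  # remember cookies between requests
    );

    # Visit the front page first, hoping the session cookie is set there ...
    $ua->get('https://webapp4.asu.edu/catalog/Home.ext');

    # ... then fetch the course page with that cookie attached.
    my $res = $ua->get(
        'https://webapp4.asu.edu/catalog/course?s=MAT&n=243&c=DTPHX&t=2144&f=INTRT&r=44843'
    );
    print $res->is_success ? $res->decoded_content : $res->status_line, "\n";
    ```

    If the cookie really is only handed out after some client-side JS runs, this will fail and a browser-driving module like WWW::Mechanize::Firefox is needed.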

Re: Downloading a web page over HTTPS?
by Your Mother (Archbishop) on Nov 26, 2014 at 23:18 UTC

    *Brittle* but works right now… exercise for the reader to correlate headers with data, dig deeper, or adjust the scraping. I would really think that if you're a TA, GA, or prof or whatever, the university would probably install a crontab for you that would be *much* more robust. Something like–

    30 2 * * *  `echo "…SQL statement and math…" | mysql students_db | mail you@your.edu -s "Your cron"`

    –would be pretty trivial on the backend for someone I would think. Certainly easier and more likely to keep working than–

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use WWW::Mechanize::Firefox;
    use HTML::TableExtract;

    # Firefox will be in a different place/name for different architectures.
    my $mech = WWW::Mechanize::Firefox->new(
        activate  => 1,
        autoclose => 1,
        launch    => "/Applications/Firefox.app/Contents/MacOS/firefox",
    );

    $mech->get("https://webapp4.asu.edu/catalog/Home.ext");

    eval {
        my ( $val, $type ) = $mech->eval_in_page(<<'JS');
    jQuery(function($){
        // Click the ASU campus+online radio button.
        $("input[name='typeSelection'][value='C']").click();
    });
    JS
    };
    die $@ if $@;

    # Get the desired search result page.
    $mech->get("https://webapp4.asu.edu/catalog/course?s=MAT&n=243&c=DTPHX&t=2144&f=INTRT&r=44843");

    my @headers = (
        qr/ Reserved \s+ Available \s+ Seats /x,
        qr/ Students \s+ Enrolled /x,
        qr/ Total \s+ Seats \s+ Reserved /x,
        qr/ Reserved \s+ Until /x,
    );

    my $te = HTML::TableExtract->new( headers => \@headers );
    $te->parse( $mech->content );

    for my $row ( $te->rows ) {
        no warnings "uninitialized";
        s/\A\s+//g, s/\s+\z//g, s/\s/ /g for @$row;
        next unless grep length, @$row;
        print "Scraped info: ", join( ',', @$row ), "\n";
    }
    Scraped info: 10,40,50,n/a

    Reading: WWW::Mechanize::Firefox, HTML::TableExtract, jQuery (note, they are using a positively ancient version right now: 1.2.3).