in reply to itterator variable in a LWP-UA-code snippet

It was cold and I double-clicked the create button, and it made 2! :-)

If I understand your question correctly, the easiest way to do it is to make the $url a separate variable instead of building it inside the GET. Also, I don't know what you are trying to accomplish if you don't want the content. Are you just trying to detect whether the server is up? There are easier ways to do that.
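For example, if all you need to know is whether the server answers, a HEAD request skips the body entirely. An untested sketch:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 10 );

# HEAD fetches only the headers, so it is a cheap "is the server up?" probe
my $response = $ua->head('http://dms-schule.bildung.hessen.de/');
print $response->is_success
    ? "Server is up\n"
    : "Server check failed: " . $response->status_line . "\n";

But assuming you do want the pages, here is a cleaned-up version of your loop: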

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $ua = LWP::UserAgent->new;

for my $i (0 .. 10000) {
    # build the URL with the loop counter appended to the school id
    my $url = sprintf(
        "http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5503,%d",
        $i );

    my $request = HTTP::Request->new( GET => $url );
    $request->header( 'Accept' => 'text/html' );

    my $response = $ua->request($request);

    # check the outcome
    if ( $response->is_success ) {
        my $pagecontent = $response->content;    # keep the page for later parsing
        print "Success $url\n";                  # print out all the URLs that were fetched
    }
    else {
        print "Error: $url " . $response->status_line . "\n";
    }
} # end of for $i loop

I'm not really a human, but I play one on earth.
Old Perl Programmer Haiku ................... flash japh

Re: Duplicate: please delete Re: iitterator variable in a LWP-UA-code snippet
by Gavin (Archbishop) on Oct 31, 2010 at 16:27 UTC

    Hee Hee

    "It was cold and I double clicked the create button, and it made 2 ! :-)"

    Only you could get away with that and not lose XP/face!

      I think I found the holy grail of how to double my XP per post, just by double-clicking. :-)

      I'm not really a human, but I play one on earth.
      Old Perl Programmer Haiku ................... flash japh
Re: Duplicate: please delete Re: iitterator variable in a LWP-UA-code snippet
by Perlbeginner1 (Scribe) on Oct 31, 2010 at 12:38 UTC
    Hello zentara, hello all! Many thanks for the quick reply.

    Your answers are very, very helpful and inspiring! Really!


    Of course I want to have the content. But I have to get prepared for this "job". I want to parse the content of all the pages. Note: there are some with empty results, since we iterate over many pages.

    Note: I want to run over a bunch of sites... some are empty, some not.

    See the loop over Hessen (a sketch for skipping the empty pages follows the list):
    http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5503
    http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5504
    http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5505
    http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5514
    etc
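    Here is an untested sketch of that loop; the "Keine Treffer" marker for an empty result page is only a guess and has to be checked against a real empty page:

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;

    # loop over the school ids from the URL list above
    for my $id ( 5503 .. 5514 ) {
        my $url = "http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=$id";
        my $response = $ua->get($url);
        next unless $response->is_success;

        my $html = $response->content;

        # guessed marker text for an empty result page
        if ( $html =~ /Keine Treffer/ ) {
            print "$url : empty result, skipping\n";
            next;
        }
        print "$url : has content, parse it here\n";
    }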

    I look for the data that is in the


    With that information I want to drive the parser (probably I will do it with HTML::TreeBuilder::XPath) to get the data out of the sites.
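    Something like this untested sketch is what I have in mind; the XPath expression is only a guess until I check the real page source:

    use strict;
    use warnings;
    use HTML::TreeBuilder::XPath;

    # $html would be $response->content in the real loop; a tiny stand-in here
    my $html = '<table><tr><td>Schulname</td><td>Musterschule</td></tr></table>';

    my $tree = HTML::TreeBuilder::XPath->new_from_content($html);

    # guessed XPath -- the real table holding the school data must be located first
    my @cells = $tree->findvalues('//table//td');
    print "$_\n" for @cells;

    $tree->delete;    # free the parse tree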

    And finally I want to store it in a database.
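    For the database part I am thinking of DBI with SQLite; an untested sketch with made-up table and column names:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=schools.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do('CREATE TABLE IF NOT EXISTS schools (id INTEGER, name TEXT)');

    my $sth = $dbh->prepare('INSERT INTO schools (id, name) VALUES (?, ?)');
    $sth->execute( 5503, 'Musterschule' );    # placeholder values

    $dbh->disconnect;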

    But I also muse about the idea of using HTTP::Request::Common; what do you think? It could make things easier, couldn't it!?
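    What I mean is something like this (untested):

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request::Common qw(GET);

    my $ua = LWP::UserAgent->new;

    # HTTP::Request::Common builds the whole request object in one call
    my $response = $ua->request(
        GET 'http://dms-schule.bildung.hessen.de/suchen/suche_schul_db.html?show_school=5503' );
    print $response->is_success ? "fetched ok\n" : $response->status_line . "\n";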

    I look forward to hearing from you!!

      Again you've chosen to ignore the formatting advice given when posting; I've mentioned this to you a couple of times. Honestly, it won't take long to read and learn this.

      You've also been asking questions similar to this for quite some time, and have been provided several solutions and code to get you going. I understand you are trying to get a working solution for this task. Which parts exactly are you having problems with? Looping? If so, see Recursion: The Towers of Hanoi problem in the Subroutines subsection of Tutorials.

      You mention you want to run this for several sites, which I presume have different markup. Why not just call a different parsing subroutine for each site? I'm sure you've mentioned at least a couple of different sites you wish to parse in your previous posts.
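      An untested sketch of that idea, with made-up site names and trivial parser stubs:

      use strict;
      use warnings;

      # dispatch table: one parsing subroutine per site
      # (site names and parser bodies are made up for illustration)
      my %parser_for = (
          'dms-schule.bildung.hessen.de' => \&parse_hessen,
          'some-other-site.example'      => \&parse_other,
      );

      sub parse_hessen { my ($html) = @_; return 'parsed a Hessen page' }
      sub parse_other  { my ($html) = @_; return 'parsed some other page' }

      my $site   = 'dms-schule.bildung.hessen.de';
      my $html   = '<html>...</html>';            # would come from the fetch loop
      my $result = $parser_for{$site}->($html);
      print "$result\n";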
