hooel has asked for the wisdom of the Perl Monks concerning the following question:

Hello!

I'm having a problem downloading XML from a site. I have a DOI address from a science article, which redirects to a page with the article description. I've got a simple HTML form: after entering the DOI address and clicking accept, it sends you to the part that should download the XML and then parse it to get some info. This is the fragment of code that should download the XML and write it to a file:

#the whole process of getting xml from site and giving it into the string variable
#my $dioaddress = "http://dx.doi.org/10.1016/j.nuclphysa.2015.05.005";
# my $reqforxml = new HTTP::Request GET => $addressdio;
# my $res = $userag->request($reqforxml);
# my $content = $res->content;
my $content = get("$addressdio");
# my $html = get("http://dx.doi.org/10.1007/s00601-015-1012-x")
#     or die "Couldn't fetch the Perl Cookbook's page.";
# print "$html";

#opening the file to write string with xml
open my $overwrite, '>', 'overwrite.xml' or die "error trying to overwrite: $!";

#writing string with xml to the file
say $overwrite "$content";

The commented lines using "HTTP::Request" work the same as the uncommented lines.
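For reference, here is the same fetch written so that a failed or blocked request is reported instead of being silently written to the file. This is only a minimal sketch, assuming $userag is the LWP::UserAgent object used in the commented lines:

# minimal sketch: same fetch, but report failures instead of writing whatever came back
my $res = $userag->get($addressdio);
die "fetch failed: ", $res->status_line unless $res->is_success;

# decoded_content honours the Content-Encoding and charset headers
my $content = $res->decoded_content;

open my $overwrite, '>', 'overwrite.xml'
    or die "error trying to overwrite: $!";
print {$overwrite} $content;
close $overwrite or die "error closing overwrite.xml: $!";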

When the DOI redirected to a Science Direct article it worked fine, but with a DOI that redirects to Link Springer the XML I get is not complete, like this:

<!DOCTYPE html>
<!--[if lt IE 7]> <html lang="en" class="no-js ie6 lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 7]>    <html lang="en" class="no-js ie7 lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>    <html lang="en" class="no-js ie8 lt-ie9"> <![endif]-->
<!--[if IE 9]>    <html lang="en" class="no-js ie9"> <![endif]-->
<!--[if gt IE 9]><!--> <html lang="en" class="no-js"

Or sometimes something like:

<!DOCTYPE html>
<!--[if lt IE 7]> <html lang="en" class="no-js ie6 lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 7]>    <html lang="en" class="no-js ie7 lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>    <html lang="en" class="no-js ie8 lt-ie9"> <![endif]-->
<!--[if IE 9]>    <html lang="en" class="no-js ie9"> <![endif]-->
<!--[if gt IE 9]><!--> <html lang="en" class="no-js"> <!--<![endif]-->
<head>
<meta charset="UTF-8"/>
<meta name="description" content=""/>
<meta name="author" content=""/>
<meta name="viewport" content="width=device-width, minimum-scale=1, maximum-scale=1"/>
<meta name="format-detection" content="telephone=no"/>
<meta name="citation_publisher" content="Springer Vienna"/>
<meta name="citation_title" content="Adiabatic Hyperspherical Analysis of Realistic Nuclear Potentials"/>
<meta name="citation_firstpage" content="1"/>
<meta name="citation_lastpage" content="7"/>
<meta name="citation_doi" content="10.1007/s00601-015-1012-x"/>
<meta name="citation_language" content="en"/>
<meta name="citation_abstract_html_url" content="http://link.springer.com/article/10.1007/s00601-015-1012-x"/>
<meta name="citation_pdf_url" content="http://link.springer.com/content/pdf/10.1007%2Fs00601-015-1012-x.pdf"/>
<meta name="citation_springer_api_url" content="http://api.springer.com/metadata/pam?q=doi:SpringerId(10.1007/s00601-015-1012-x)&amp;api_key="/>
<meta name="citation_author" content="K. M. Daily"/>
<meta name="citation_author_institution" content="Purdue University"/>
<meta name="citation_author_email" content="daily5@purdue.edu"/>
<meta name="citation_author" content="Alejandro Kievsky"/>
<meta name="citation_author_institution" content="Instituto Nazionale di Fisica Nucleare"/>
<meta name="citation_author" content="Chris H. Greene"/>
<meta name="citation_author_institution" content="Purdue University"/>
<meta name="citation_journal_title" content="Few-Body Systems"/>
<meta name="citation_journal_abbrev" content="Few-Body Syst"/>
<meta name="citation_issn" content="0177-7963"/>
<meta name="citation_issn" content="1432-5411"/>
<meta name="citation_online_date" content="2015/06/26"/>
<m

As you can see, it stops getting the XML at some point :/. Can someone tell me what is wrong, or what I'm doing wrong? Like I said, a DOI that redirected to Science Direct worked pretty well.

Re: Getting XML from DOI address
by Anonymous Monk on Jul 15, 2015 at 23:20 UTC

    As you can see, it stops getting the XML at some point :/. Can someone tell me what is wrong, or what I'm doing wrong? Like I said, a DOI that redirected to Science Direct worked pretty well.

    You're not reading the English words you get :) I get "ScienceDirect does not support the use of the crawler software. If you have any questions please contact your helpdesk."

    That is kinda self-explanatory.

      Ok, so this is the whole code:

      #!"E:\xamp\perl\bin\perl.exe" -T
      use 5.010;
      use CGI;
      use strict;
      use warnings;
      use LWP::UserAgent;
      use LWP::Simple;
      use HTML::TreeBuilder;

      my $q = CGI->new();
      my $userag = LWP::UserAgent->new(timeout=>30); #it's illegal :( p.s. to make what they say, delete argument from "new" "agent => 'MyApp/0.1'"

      say $q->header(), $q->start_html();

      my $addressdio = "";

      #getting the address from form
      for my $param ($q->param()) {
          my $safe_param = $q->escapeHTML($param);
          say "<p><strong>$safe_param</strong>: ";
          for my $value ($q->param($param)) {
              say $q->escapeHTML($value);
              $addressdio = $q->escapeHTML($value);
          }
          say '</p>';
      }

      #the whole process of getting xml from site and giving it into the string variable
      #my $dioaddress = "http://dx.doi.org/10.1016/j.nuclphysa.2015.05.005";
      my $reqforxml = new HTTP::Request GET => $addressdio;
      my $res = $userag->request($reqforxml);
      my $content = $res->content;
      # my $content = get("$addressdio");
      # my $html = get("http://dx.doi.org/10.1007/s00601-015-1012-x")
      #     or die "Couldn't fetch the Perl Cookbook's page.";
      # print "$html";

      #opening the file to write string with xml
      open my $overwrite, '>', 'overwrite.xml' or die "error trying to overwrite: $!";

      #writing string with xml to the file
      say $overwrite "$content";

      #A little system just to get the title
      #($title) = $content =~ <h1 class="svTitle" id="ti0010">(.*?)</h1>;
      #print "Title of article: $title";
      #my $tree = HTML::TreeBuilder->new;
      #$tree->parse_file("overwrite.txt");
      #foreach my $h1 ($tree->find('h1')){
      #    print $h1->as_text, "<br />";
      #}

      close $overwrite;

      say "<h1>And here's the site:</h1>";
      print "$content"; #string with our site
      say $q->end_html();
      I got the XML from ScienceDirect thanks to this: agent => 'MyApp/0.1', but even when I pass this parameter to the agent while connecting to Link Springer, the situation is the same as I explained in the first post.
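      For the Link Springer case I can at least set the agent once in the constructor and print the status line, to see what the server actually answers. A small sketch, reusing the example agent name from above:

      # sketch: set the agent in the constructor, then inspect what Springer actually answers
      my $userag = LWP::UserAgent->new(
          timeout => 30,
          agent   => 'MyApp/0.1',   # the example identifier mentioned above
      );
      my $res = $userag->get($addressdio);
      print 'Status: ', $res->status_line, '<br />';
      print 'Length: ', length($res->decoded_content // ''), " bytes<br />";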

        I got the XML from ScienceDirect thanks to this: agent => 'MyApp/0.1', but even when I pass this parameter to the agent while connecting to Link Springer, the situation is the same as I explained in the first post.

        Yes, and then what happened?

        It's like ordering a drink from a bartender while handing over some pesos. The bartender's only response is "we don't take pesos".

        The website is telling you "I don't like that".

        I'm of the opinion that if a website does that, and you can't figure out a way around it -- well, you should listen to the website.
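        If what you actually want is the article metadata rather than the rendered page, the scraped head you posted even advertises it: the citation_springer_api_url meta tag points at api.springer.com. A rough sketch, assuming you register for a Springer API key ($api_key below is a placeholder):

        use LWP::UserAgent;
        use URI::Escape qw(uri_escape);

        my $api_key = 'YOUR_KEY_HERE';               # placeholder - you need to register for a key
        my $doi     = '10.1007/s00601-015-1012-x';

        # endpoint and query format taken from the citation_springer_api_url meta tag above
        my $url = 'http://api.springer.com/metadata/pam'
                . '?q=' . uri_escape("doi:SpringerId($doi)")
                . '&api_key=' . $api_key;

        my $ua  = LWP::UserAgent->new(timeout => 30, agent => 'MyApp/0.1');
        my $res = $ua->get($url);
        die 'API request failed: ', $res->status_line unless $res->is_success;
        print $res->decoded_content;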

Re: Getting XML from DOI address
by choroba (Cardinal) on Jul 15, 2015 at 16:51 UTC
    Do you close $overwrite?
    لսႽ† ᥲᥒ⚪⟊Ⴙᘓᖇ Ꮅᘓᖇ⎱ Ⴙᥲ𝇋ƙᘓᖇ
      Actually - I didn't, but now I did, and it didn't help.
Re: Getting XML from DOI address
by GotToBTru (Prior) on Jul 15, 2015 at 18:17 UTC

    get is not a defined function in Perl. What module are you using?

    Dum Spiro Spero
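
    (The get above comes from LWP::Simple, which the posted script does load. LWP::Simple also exports getstore and is_success, which at least surface the HTTP status code. A minimal sketch, assuming $addressdio holds the DOI URL as in the script above:)

    use LWP::Simple qw(getstore is_success);

    # getstore writes the body straight to a file and returns the HTTP status code
    my $rc = getstore($addressdio, 'overwrite.xml');
    die "download failed with HTTP status $rc" unless is_success($rc);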