in reply to method not supported

Before we try to get clever and diagnose the problem, we probably ought to see a minimal script that generates the error. It should be possible to boil this type of problem down to ten lines or fewer. If the problem goes away, you'll be halfway toward isolating the issue on your own. If it doesn't go away, you'll have a great working example for us to look at for you.


Dave

Replies are listed 'Best First'.
Re^2: method not supported
by aquarium (Curate) on Dec 28, 2005 at 05:48 UTC
    what URL are you using for the "other" dictionary with lwp::useragent?
    the hardest line to type correctly is: stty erase ^H
Re: method not supported
by Anonymous Monk on Dec 28, 2005 at 07:31 UTC

    Here's the code ...


    my $agent = new LWP::UserAgent;
    my $define = new HTTP::Request;
    $define->method('get');
    $define->url($url);
    my $website = $agent->request($define);
    my $file = $website->content;

    from there $file gets fed into a parser ... but to test dictionaries I just print $file."\n";

    www.dictionary.com works fine ... $url="http://dictionary.reference.com/search?q=".$word;

    gives the html that can be parsed ...

    but www.naver.com doesn't ... $url="http://dic.naver.com/search.naver?mode=all&query=".$word;

    where $word="happy"; gives me this html ...


    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <HTML><HEAD>
    <TITLE>501 Method Not Implemented</TITLE>
    </HEAD><BODY>
    <H1>Method Not Implemented</H1>
    get to /search.naver not supported.<P>
    Invalid method in request get /search.naver?mode=all&amp;query=happy HTTP/1.1<P>
    </BODY></HTML>

    I tried 'post' and read through the CPAN docs for the module, but I can't grab web pages from the site ... which is annoying, since the XML tags at naver make parsing a lot easier ...
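    One detail worth checking: the 501 page above quotes the lowercase method name ('get') back in its complaint, and HTTP method names are case-sensitive, so some servers will reject 'get' while accepting 'GET'. Here's a minimal sketch (using the same word and URL as above) that builds the request with an uppercase method; it only constructs and prints the request, without hitting the network:

```perl
use strict;
use warnings;
use HTTP::Request;

# Same word and URL as in the thread.
my $word = 'happy';
my $url  = "http://dic.naver.com/search.naver?mode=all&query=$word";

# HTTP method names are case-sensitive, so pass 'GET' rather than 'get';
# the 501 error above echoes the lowercase 'get' in its complaint.
my $request = HTTP::Request->new( GET => $url );

print $request->method, "\n";   # GET
print $request->uri, "\n";
```

    Passing the method and URL straight to the constructor also avoids the separate method()/url() calls from the original snippet.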

      Well, I don't see the problem, and after testing and tinkering with it I couldn't get your code to work either. I did try assigning an agent name to make it look like the request was coming from Firefox instead of LWP::UserAgent, just in case the server is blocking robots.

      I did rewrite the code to more closely match the synopsis given in the docs for LWP::UserAgent, and that seemed to do the trick:

      use strict;
      use LWP::UserAgent;

      my $word = "happy";
      my $url  = "http://dic.naver.com/search.naver?mode=all&query=" . $word;

      my $agent = new LWP::UserAgent;
      $agent->agent('Firefox/1.5');

      my $response = $agent->get( $url );
      if( $response->is_success ) {
          print $response->content();
      } else {
          die $response->status_line;
      }

      See if you can adapt that to your needs, because it seemed to work fine for me.


      Dave

        Yes, your correction works perfectly ... the difference seems to be this line ...

        $agent->agent('Firefox/1.5');

        this seems to be required for some sites but not others ... and I missed it somehow when I was looking at the docs ...

        as for my potentially heinous intentions (LOL) - I'm a foreign language instructor in Korea - I use a copy of Boutell's PerlMud 3.0 with a POE skeleton running a bunch of mud bots as part of my curriculum ... a very minor part of the bots that I'm working on is the ability to define words, offer spelling suggestions, synonyms and antonyms - that kind of thing ...

        the students aren't allowed to surf, so unless the bots provide it they're SOL ... I stumbled on my problem when I switched online dictionaries - found one with more understandable definitions for foreign language students ... and better XML for parsing ...

        Thanks