Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

This works:
use WWW::Curl::Easy;

my $curl = WWW::Curl::Easy->new;
$curl->setopt(CURLOPT_HEADER, 0);
$curl->setopt(CURLOPT_TIMEOUT, 30);
$curl->setopt(CURLOPT_FOLLOWLOCATION, 1);
$curl->setopt(CURLOPT_URL, 'http://example.com');

my $response_body = '';
open(my $fileb, ">", \$response_body);
$curl->setopt(CURLOPT_WRITEDATA, $fileb);

my $retcode = $curl->perform;
if ($retcode == 0) {
    print("Received response: $response_body\n");
} else {
    print("An error happened: $retcode ".$curl->strerror($retcode)." ".$curl->errbuf."\n");
}


This failed with "An error happened: 3 URL using bad/illegal format or missing URL No URL set!":
require WWW::Curl::Easy;
import WWW::Curl::Easy;

my $curl = WWW::Curl::Easy->new;
$curl->setopt(CURLOPT_HEADER, 0);
$curl->setopt(CURLOPT_TIMEOUT, 30);
$curl->setopt(CURLOPT_FOLLOWLOCATION, 1);
$curl->setopt(CURLOPT_URL, 'http://example.com');

my $response_body = '';
open(my $fileb, ">", \$response_body);
$curl->setopt(CURLOPT_WRITEDATA, $fileb);

my $retcode = $curl->perform;
if ($retcode == 0) {
    print("Received response: $response_body\n");
} else {
    print("An error happened: $retcode ".$curl->strerror($retcode)." ".$curl->errbuf."\n");
}

Why? I really need to use 'require' instead of 'use'; please help.

Replies are listed 'Best First'.
Re: why this couldn't work?
by Corion (Patriarch) on Jul 01, 2011 at 16:46 UTC

    I really doubt that CURLOPT_HEADER has the value you want, in your second example. If you had used strict and/or warnings, Perl would have told you so:

    use strict;
    require WWW::Curl::Easy;
    import WWW::Curl::Easy;

    my $curl = WWW::Curl::Easy->new;
    $curl->setopt(CURLOPT_HEADER, 0); # boom

    You will either need to fully qualify the names of the constants, or import them before your code gets compiled:

    use strict;
    require WWW::Curl::Easy;
    import WWW::Curl::Easy;

    my $curl = WWW::Curl::Easy->new;
    $curl->setopt(WWW::Curl::Easy::CURLOPT_HEADER, 0); # or wherever CURLOPT_HEADER gets declared
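
    A minimal sketch of the second route, assuming WWW::Curl::Easy's Exporter-style import behaves here the same way it does under 'use': wrapping the require/import in a BEGIN block runs the import at compile time, so the bareword constants further down resolve.

    use strict;
    use warnings;

    # 'use Module' is equivalent to 'BEGIN { require Module; Module->import }',
    # so doing that by hand makes the constant subs available before the rest
    # of the file is compiled.
    BEGIN {
        require WWW::Curl::Easy;
        WWW::Curl::Easy->import;
    }

    my $curl = WWW::Curl::Easy->new;
    $curl->setopt(CURLOPT_HEADER, 0); # resolves, because import ran at compile time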
Re: why this couldn't work?
by Anonymous Monk on Jul 01, 2011 at 16:47 UTC

    Who defines CURLOPT_FOLLOWLOCATION?

    See require, use, and import

    When you use WWW::Curl::Easy;, it exports constants like CURLOPT_FOLLOWLOCATION at compile time.

    They are still accessible via WWW::Curl::CURLOPT_FOLLOWLOCATION or some such.
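
    To make the timing problem concrete, here is a small sketch (no strict, mirroring the second snippet above): the statement is compiled before import() runs, so the bareword never becomes the numeric option value.

    #!/usr/bin/perl
    require WWW::Curl::Easy;   # module loaded at run time
    import WWW::Curl::Easy;    # constants exported, but only at run time

    # This statement was already compiled before import() ran, so without
    # strict CURLOPT_HEADER is compiled as the string "CURLOPT_HEADER",
    # not the numeric value libcurl expects.
    my $opt = CURLOPT_HEADER;
    print "setopt would receive: $opt\n";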

    FWIW, Re: why this couldn't work? is not an effective node title

      Neither WWW::Curl::Easy::CURLOPT_HEADER nor WWW::Curl::CURLOPT_HEADER worked. I'm not sure where they get defined; maybe it's inside the XS library? Any ideas? (BTW, sorry for the node title; I didn't know that would be a problem.)

        If it is defined in the XS library, it will be available just the same as if it were defined in Perl space. I'm not sure exactly what you tested, whether you get errors now, or what errors you get under strict. It seems that WWW::Curl::Easy does something fancy with AUTOLOAD, so you might want to inspect %WWW::Curl:: for interesting constant names.

        A helpful node title would include "WWW::Curl", btw, and describe the problem, instead of asking a general question that could just as well apply to growing tomatoes on the moon.

        I tried both and came up with this:
        Bareword "WWW::Curl::CURLOPT_HEADER" not allowed while "strict subs" in use at t.cgi line 16.
        Bareword "WWW::Curl::Easy::CURLOPT_TIMEOUT" not allowed while "strict subs" in use at t.cgi line 17.
Re: why this couldn't work?
by PerlAddict42 (Novice) on Mar 02, 2017 at 08:19 UTC

    Writing scripts that need to run on over 30,000 different systems, I ran into the same issue many times: the constants are not available when WWW::Curl::Easy is loaded with 'require' or 'eval'.

    This morning I checked the sources. As the documentation states, WWW::Curl::Easy is just a thin layer around libcurl, so everything is autoloaded from the XS library:

    $curl->setopt($curl->CURLOPT_HEADER, 0);
    $curl->setopt($curl->CURLOPT_TIMEOUT, 30);
    $curl->setopt($curl->CURLOPT_FOLLOWLOCATION, 1);
    $curl->setopt($curl->CURLOPT_URL, 'http://example.com');

    Wish I had checked this years ago ... never too old to learn :-)
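
    For completeness, a sketch of the original script rewritten that way (assuming, as described above, that the CURLOPT_* names resolve as autoloaded methods on the handle):

    use strict;
    use warnings;

    require WWW::Curl::Easy;   # no import needed: constants are called as methods

    my $curl = WWW::Curl::Easy->new;
    $curl->setopt($curl->CURLOPT_HEADER, 0);
    $curl->setopt($curl->CURLOPT_TIMEOUT, 30);
    $curl->setopt($curl->CURLOPT_FOLLOWLOCATION, 1);
    $curl->setopt($curl->CURLOPT_URL, 'http://example.com');

    my $response_body = '';
    open(my $fileb, ">", \$response_body) or die "open: $!";
    $curl->setopt($curl->CURLOPT_WRITEDATA, $fileb);

    my $retcode = $curl->perform;
    if ($retcode == 0) {
        print("Received response: $response_body\n");
    } else {
        print("An error happened: $retcode ".$curl->strerror($retcode)." ".$curl->errbuf."\n");
    }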

Re: why this couldn't work?
by Anonymous Monk on Dec 18, 2013 at 19:55 UTC
    I am getting the following errors. Can someone help?

    Bareword "WWW::Curl::Easy::CURLOPT_URL" not allowed while "strict subs" in use at refresh.pl line 85.
    Bareword "WWW::Curl::Easy::CURLOPT_WRITEDATA" not allowed while "strict subs" in use at refresh.pl line 88.
    Bareword "CURLINFO_HTTP_CODE" not allowed while "strict subs" in use at refresh.pl line 96.
    Execution of refresh.pl aborted due to compilation errors.