I don't have a blog. When I have something to write, it pretty much ends up on PerlMonks. This is (as the title suggests) more story than meditation. Maybe what I've uncovered is obvious, maybe not. I just hope it will help someone out.

I'm developing for the Solaris (SPARC) OS. Mostly things are the same as on Linux, but sometimes they aren't. When I was having problems getting https://localhost/ to work, I tried using openssl to get in:

openssl s_client -connect localhost:443
No dice -- got an error right away. Eventually, I discovered that
openssl s_client -connect 10.1.1.161:443
worked instead. I'm sure someone can explain that -- I can't.

Next, I was trying to run a test script to hit the local webserver. Since I'd already discovered that localhost wouldn't work, I was using the IP address. That worked fine until there was a redirect, at which point things just died quietly (i.e., nothing obvious in the web logs).

So then I tried using a browser (with the IP address, not the name) to check that things worked, and discovered that Firefox complained that the SSL certificate I had installed on this (virtual) box didn't match the IP address I was trying to use. So I put the name of the host into my test script, and was finally able to log in.

So my test script is fairly simple: log in as a variety of different users, each with a different set of privileges, and confirm that each user can see the links they're allowed to see. If a link does exist, follow it, making sure that a valid page (HTTP code 200) gets returned.
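In outline, the script does something like the following. This is a simplified sketch using Test::WWW::Mechanize; the host name, form fields, users and links are all stand-ins for the real values.

use strict;
use warnings;
use Test::More 'no_plan';
use Test::WWW::Mechanize;

# Hypothetical users and the links each one is allowed to see.
my %users = (
    admin  => { password => 'secret', links => [ 'Admin', 'Reports' ] },
    viewer => { password => 'secret', links => [ 'Reports' ] },
);

for my $name ( sort keys %users ) {
    my $mech = Test::WWW::Mechanize->new;
    $mech->get_ok( 'https://www.example.com/login', "fetched login page for $name" );
    $mech->submit_form_ok(
        { fields => { username => $name, password => $users{$name}{password} } },
        "logged in as $name"
    );

    # Confirm each permitted link exists, follow it, and check the result.
    for my $link ( @{ $users{$name}{links} } ) {
        $mech->follow_link_ok( { text => $link }, "followed '$link' as $name" );
        is( $mech->status, 200, "'$link' returned 200 for $name" );
    }
}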

Since some of the links belong to packages that aren't installed yet, I expect some of them to fail, so that part of the code is inside a SKIP block. This worked fine when I ran the script from my Ubuntu box, hitting the Solaris box. But now that I'm running from within Solaris, the test script dies as soon as a non-existent page is requested, and I'm not sure why.
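The SKIP block is along these lines; here $package_installed is a stand-in for however the script detects the optional package.

use strict;
use warnings;
use Test::More tests => 2;
use Test::WWW::Mechanize;

my $mech = Test::WWW::Mechanize->new;

# Stand-in flag: true only when the optional package is installed.
my $package_installed = 0;

SKIP: {
    skip 'optional package not installed', 2 unless $package_installed;

    $mech->get_ok( 'https://www.example.com/optional/page', 'fetched optional page' );
    is( $mech->status, 200, 'optional page returned 200' );
}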

The links are in a table, so I've just commented those elements out, and now my test script is passing OK -- I guess I need to go back to the docs and read them again.

Sometimes development is maddening like that.

Update: I've started a SoPW question to follow up on my last point.

Alex / talexb / Toronto

"Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

Re: Stories from the front
by zentara (Cardinal) on Jan 03, 2009 at 14:34 UTC
    openssl s_client -connect localhost:443
    No dice -- got an error right away. Eventually, I discovered that
    openssl s_client -connect 10.1.1.161:443
    worked instead. I'm sure someone can explain that -- I can't.

    I don't know about Solaris, but on linux, localhost is defined in /etc/hosts, and is normally assigned 127.0.0.1. You might try adding a localhost entry for 10.1.1.161 to /etc/hosts, or whatever file Solaris uses.
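    If I understand the suggestion, that would mean adding a line something like this alongside the usual 127.0.0.1 entry (untested):

    10.1.1.161 localhost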


    I'm not really a human, but I play one on earth.
    Remember How Lucky You Are

      My first thought on reading the OP was that there might be a VirtualHost entry using port 443 defined only for the 10. address. One thing that might provide a clue would be to see whether netstat (sorry, but I don't recall the appropriate options on Solaris) shows the server listening on port 443 on all addresses, or only on port 443 on the 10. IP address.
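      For example, something like this (again, untested on Solaris) should show whether the listener is bound to all addresses (*.443) or just the one:

      netstat -an | grep 443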

      Hope that helps.

        You might try adding a localhost entry for 10.1.1.161 to /etc/hosts, or whatever file Solaris uses.

      Solaris does use /etc/hosts -- I may have to check that file and see what it contains. I do know that from both Linux and openSolaris (I'm at home and can't try it on Solaris right now), I'm able to ssh into localhost, 127.0.0.1 and $localIP. And my openSolaris box has

      ::1 localhost
      127.0.0.1 foobar foobar.local localhost loghost
      in its /etc/hosts file. Thanks for the feedback.

      Alex / talexb / Toronto

      "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

        Another thing to mull over is that localhost is a loopback device. So besides having localhost linked to your 10.x address in /etc/hosts, you may have to set up the network configuration to add the extra loopback device for it to listen on. Google for "ifconfig add loopback" for how to do it. You might need to add a route to the new loopback device, like "/sbin/route add -net 10.0.0.0" (untested), or something like
        ifconfig lo:1 10.0.0.1
        route add -host 10.0.0.1 lo:1
        then when you do an ifconfig, you will see lo and lo:1 listed as loopback devices. But Solaris or the Linux distro may add the route automatically at boot, or when the network is re-initialized. So if in doubt, reboot after editing /etc/hosts, unless you know how to restart the network on Solaris.

        And also, as atcroft suggested, you need to check the httpd server configuration file for the https request, to see that 10.0.0.1 is configured.


        I'm not really a human, but I play one on earth.
        Remember How Lucky You Are
Re: Stories from the front
by missingthepoint (Friar) on Jan 04, 2009 at 08:31 UTC
    making sure that a valid page (HTTP code is 200) gets returned

    Unfortunately, you can't rely on a response code of 200 always indicating success - some web servers (I've heard) return that code even for a failed request. I think (I'm open to correction) your best bet is to check that the returned data is 'good' in some way, e.g.

    use Test::More 'no_plan';
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new;
    $mech->get( $page );    # $page holds the URL under test
    is( $mech->title(), "Alex's Wacky Widgets", "page GET success" );

    or, with Test::WWW::Mechanize, which you might like to check out:

    $mech->title_is( "Alex's Wacky Widgets" );

    Life is denied by lack of attention,
    whether it be to cleaning windows
    or trying to write a masterpiece...
    -- Nadia Boulanger
        Unfortunately, you can't rely on a response code of 200 always indicating success - some web servers (I've heard) return that code even for a failed request.

      It turns out that I'm testing a web application that I'm maintaining -- so the application is well behaved in that regard. I check that the status was OK, and also check that the content is what I was expecting.

      And I expect to get a 200 status even on a page that returns something like 'User not found' -- that isn't a protocol error, it's an application error. I also expect to get a 404 on a 'page not found' error, but for pages that are part of a package that may not be installed, that's OK. What I think you're describing conflates an application error with a protocol error, which I don't think is right.
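      In code, the two levels look something like this (a sketch; the URL and the message are made up):

      use Test::More 'no_plan';
      use WWW::Mechanize;

      my $mech = WWW::Mechanize->new;
      $mech->get( 'https://www.example.com/user?id=12345' );

      # Protocol level: the request itself succeeded.
      is( $mech->status, 200, 'page came back OK' );

      # Application level: the page reports the condition we expect.
      like( $mech->content, qr/User not found/, 'application reported the expected error' );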

      And now I know to read the parts of the module deltas that say something like "This may break your code" much more closely. :)

      Alex / talexb / Toronto

      "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

      If your web server gives 200 as a response when the page is not present or the user is not authorized, then you need to use a different web server package. There are standards for these things for a reason. As talexb said already, don't confuse application conditions with protocol conditions.

        You are, of course, both correct... If you distinguish between application errors and protocol errors (talexb), then the solution to inappropriate response codes (protocol errors) is indeed to get a new web server (mr_mischief).

        Also, I shouldn't have passed on second-hand advice when I don't know the source. Sorry.


        Life is denied by lack of attention,
        whether it be to cleaning windows
        or trying to write a masterpiece...
        -- Nadia Boulanger