in reply to retrieve html code in reverse from remote server

Is there a way I can download just the last 20 lines from the site with Perl, without having to download the whole page, to make for a faster download?

Not directly, but there is a way to download only the last so-many bytes, provided the server on the other end supports HTTP range requests. Note that if the server doesn't support them, you'll just end up downloading the whole thing anyway. For example, this gets the last 500 bytes of a page:

use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;

my $url = 'http://cpan.uwinnipeg.ca/htdocs/libwww-perl/LWP/UserAgent.html';

# ask the server for only the last 500 bytes of the page
my $response = $ua->get($url, Range => 'bytes=-500');
print $response->content;
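
If you want to check whether the server actually honored the range rather than silently sending the whole page, you can look at the response status: a server that supports range requests answers a partial GET with 206 Partial Content, while one that ignores the Range header just replies 200 OK. A minimal sketch along the lines of the example above:

use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $url = 'http://cpan.uwinnipeg.ca/htdocs/libwww-perl/LWP/UserAgent.html';

my $response = $ua->get($url, Range => 'bytes=-500');

if ($response->code == 206) {
    # 206 Partial Content: the server honored the Range header,
    # so content() holds only the last 500 bytes
    print $response->content;
}
else {
    # 200 (or an error): the Range header was ignored,
    # so content() is the whole page (or an error message)
    print "server sent ", $response->status_line, "\n";
}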

So what I would recommend is that you estimate the number of bytes in the last 20 lines, double it, and request that many bytes from the end of the page. Assuming the doubled estimate came to, say, 3500, you'd do something like this:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;

my $url = 'http://xx.xx.xx.xx/xx.xx.xx.xx/pbsvss.htm';

# ask for only the last 3500 bytes (the doubled estimate)
my $response = $ua->get($url, Range => 'bytes=-3500');

my @lines     = split /\n/, $response->content;
my @lastlines = @lines[-20 .. -1];

print "last lines:\n\n";
print join("\n", @lastlines), "\n";
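
One thing to watch for: the byte range will almost certainly start in the middle of a line, so the first element of @lines is usually a fragment, and if the page turns out to be shorter than your estimate you can end up with fewer than 20 lines (and undefs in the slice). Here is a variation on the snippet above that allows for both; $wanted and $n are just illustrative names, not anything from the original:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;

my $url    = 'http://xx.xx.xx.xx/xx.xx.xx.xx/pbsvss.htm';
my $wanted = 20;    # how many trailing lines we're after

my $response = $ua->get($url, Range => 'bytes=-3500');
my @lines    = split /\n/, $response->content;

# the first line of a partial response is almost certainly cut off
# mid-line, so discard it as long as we still have enough lines left
shift @lines if $response->code == 206 && @lines > $wanted;

# don't slice further back than the page actually goes
my $n = @lines < $wanted ? scalar @lines : $wanted;
my @lastlines = $n ? @lines[-$n .. -1] : ();

print join("\n", @lastlines), "\n";
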
--
@/=map{[/./g]}qw/.h_nJ Xapou cets krht ele_ r_ra/; map{y/X_/\n /;print}map{pop@$_}@/for@/