Your approach does not work because you have structured your program so that it collects all of the information before it starts printing any output.
The standard approach for running long processes from a CGI script is described in Watching Long Processes Through CGI; you can adapt it to your needs by launching an external program that writes its output to a file.
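Adapted to your situation, the skeleton might look roughly like this (an untested sketch of that pattern, not your code: the /tmp/cgi-logs spool directory, the 'job' parameter and the placeholder some-long-command are all assumptions you would replace):

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# Assumed spool directory and placeholder command -- adjust for real use.
my $dir = '/tmp/cgi-logs';
mkdir $dir unless -d $dir;

if (my $id = param('job')) {
    # Later requests: show whatever the worker has written so far.
    die "bad job id" unless $id =~ /\A[0-9.]+\z/;
    my $done = -e "$dir/$id.done";
    print header,
          start_html(-title => 'Logging...',
                     $done ? () : (-head => ["<meta http-equiv=refresh content=5>"])),
          h1('Logging...');
    if (open my $fh, '<', "$dir/$id.log") {
        local $/;                                  # slurp whatever is there so far
        my $log = <$fh>;
        print pre(escapeHTML(defined $log ? $log : ''));
    }
    print p(i('... continuing ...')) unless $done;
    print end_html;
}
else {
    # First request: start the worker, then redirect the browser to the
    # monitoring URL so Apache is not left waiting for the child.
    my $id = time() . ".$$";
    defined(my $pid = fork) or die "Cannot fork: $!";
    unless ($pid) {
        close STDOUT;                              # let the parent finish the response
        system("some-long-command > $dir/$id.log 2>&1");
        open my $flag, '>', "$dir/$id.done" and close $flag;
        exit 0;
    }
    param('job', $id);
    print redirect(self_url());
}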
#!/usr/bin/perl
use strict;
$|++;
use CGI qw(:all);                                     # import shortcuts
use Fcntl qw(:flock);                                 # imports LOCK_EX, LOCK_SH, LOCK_NB
use CGI::Carp qw(warningsToBrowser fatalsToBrowser);  # for debugging
use Net::SSH::Perl;
use CGI qw(:all delete_all escapeHTML);

print header;

my ($TITLE, $DeploySvr, $KIT, $ssh, $UNM, $Pass, $stdout, $stderr, $exit,
    $cmd, $session, $cache, $data, $pid);

if (my $session = param('session'))
{   # returning to pick up session data
    $cache = get_cache_handle();
    $data  = $cache->get($session);
    unless ($data and ref $data eq "ARRAY")
    {   # something is wrong
        exit 0;
    }
    print start_html(-title => "Logging...",
                     ($data->[0] ? () :
                      (-head => ["<meta http-equiv=refresh content=5>"])));
    print h1("Logging...");
    print pre(escapeHTML($data->[1]));
    print p(i("... continuing ...")) unless $data->[0];
    print end_html;
}
else { ExecuteProcess(); }

sub ExecuteProcess
{
    $session = get_session_id();
    $cache   = get_cache_handle();
    $cache->set($session, [0, ""]);    # no data yet
    $DeploySvr = param('DrpServer');
    $KIT       = param('TxtKit');
    $UNM       = param('username');
    $Pass      = param('password');
    if ($pid = fork)
    {   # parent does
        delete_all();                  # clear parameters
        param('session', $session);
        print redirect(self_url());
    }
    elsif (defined $pid)
    {   # child does
        close STDOUT;                  # so parent can go on
        $ssh = Net::SSH::Perl->new('113.128.122.27');
        $ssh->login($UNM, $Pass);
        $cmd = "ls -l";
        my ($stdout, $stderr, $exit) = $ssh->cmd($cmd);
        my $buf = "";
        while ($stdout)
        {
            $buf .= $_;
            $cache->set($session, [0, $buf]);
        }
        $cache->set($session, [1, $buf]);
        exit 0;
    }
    else
    {
        die "Cannot fork: $!";
    }
}

sub get_cache_handle
{
    require Cache::FileCache;
    Cache::FileCache->new({
        namespace           => 'LogOutput',
        username            => 'nobody',
        default_expires_in  => '30 minutes',
        auto_purge_interval => '4 hours',
    });
}

sub get_session_id
{
    require Digest::MD5;
    Digest::MD5::md5_hex(Digest::MD5::md5_hex(time() . {} . rand() . $$));
}
In the above code, I tried to fork the process.
This CGI gets its input from another HTML page which POSTs into it: it connects to a remote box with the SSH module, using the username and password supplied by the HTML form, and lists the command output on the web page.
On execution it forks the process, but it is unable to display the messages from the forked command. Also, the web page seems to keep waiting, so I think Apache has not received the end of the response.
Please review and let me know.
# Original code:
my $buf = "";
while (<F>) {
    $buf .= $_;
    $cache->set($session, [0, $buf]);
}
$cache->set($session, [1, $buf]);
exit 0;
Your code never reads from the STDOUT of the SSH process. In fact, I don't think you have understood how Net::SSH::Perl works: its cmd() method blocks until the remote command has finished and then returns the complete output as a string, so there is nothing to read incrementally. In the loop below, $stdout never changes and $_ is never set, so the loop either runs forever or does nothing useful. Using Net::SSH::Perl won't help you, at least not the way you've used it here:
$cmd="ls -l";
my($stdout, $stderr, $exit) = $ssh->cmd($cmd);
my $buf = "";
while ($stdout)
{
$buf .= $_;
$cache->set($session, [0, $buf
+]);
}
$cache->set($session, [1, $buf]);
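If that reading of Net::SSH::Perl is correct and cmd() only returns once the remote command has completed, the simplest fix is to drop the loop and store the returned output directly (a sketch that reuses the $ssh, $cache and $session variables from the script above):

my ($stdout, $stderr, $exit) = $ssh->cmd("ls -l");

# cmd() blocks until the remote command exits and hands back its complete
# output, so there is nothing to poll: store the result and finish.
$cache->set($session, [1, $stdout]);
exit 0;

For genuinely incremental output you need a filehandle you can read from while the command is still running, which is what the <F> loop in the original code quoted above is doing.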
Please review your code and the documentation for Net::SSH::Perl, and consider how it can be used in your program.
You can use "Client Pull" or "meta refresh".
include a directive like this in the HTML
generated:
<HEAD>
<META HTTP-EQUIV="Refresh" CONTENT="2">
<TITLE>Page</TITLE>
</HEAD>
In this example the page to reload is the current page, since no URL attribute has been specified.
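Since the script above already uses CGI.pm, the same directive can be produced from Perl rather than hand-written HTML (a small sketch; the 2-second interval is just the value from the example):

use CGI qw(:standard);

# Equivalent of the hand-written <META HTTP-EQUIV="Refresh"> tag above;
# with no URL given, the browser simply reloads the current page.
print header,
      start_html(
          -title => 'Page',
          -head  => meta({ -http_equiv => 'Refresh', -content => '2' }),
      );

A Refresh: HTTP header (print header(-Refresh => '2')) should also work, since CGI.pm turns extra named arguments to header() into additional header lines.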
Hope it helps
Casiano
Slightly OT... If you surround your code with <code> and </code> tags, it would be easier for us to read your script.