Re: sockets and such in Perl
by polettix (Vicar) on Jul 03, 2005 at 10:58 UTC
"Sockets" is somewhat vague; anyway, all socket flavours I know of are bidirectional. So, first of all, there's no need to have a pair of them - they're not pipes or FIFOs! This should halve the complexity :)
Second, you probably have to decide who's the server and who's the client. Even if it may seem strange, I'd make the man-in-the-middle the client, simply because... I forecast it will be simpler to code. A matter of taste and of forecasting powers, anyway.
This issue, though physically quite independent of the following algorithm, is logically connected to it. How would you implement it inside a single process, reading from three files? Here you have much the same scenario, only with a bit of control logic to avoid reading more than you need. Again, the man-in-the-middle process will be the master driver, while the other two will act as slaves (i.e. servers).
The first and third processes (i.e. the "endpoints"):
    Open file
    Open socket in server mode
    Accept incoming request
    Loop:
        read a request from the socket
        if the request is "stop", exit
        else read a name from the file and send it back on the socket
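A minimal, untested sketch of such an endpoint in Perl (the port number and file name are just placeholders, and the one-line "stop" convention is the fake-command protocol described below):
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Placeholder values: adjust to your setup
my $port = 10001;
my $file = 'names1.txt';

open my $fh, '<', $file or die "can't open $file: $!";

my $listener = IO::Socket::INET->new(
    LocalPort => $port,
    Proto     => 'tcp',
    Listen    => 1,
) or die "can't listen on port $port: $!";

# Accept the single connection from the man-in-the-middle
my $conn = $listener->accept() or die "accept(): $!";

while (my $request = <$conn>) {
    chomp $request;
    last if $request eq 'stop';   # master told us to quit
    my $name = <$fh>;             # next name from the file
    last unless defined $name;    # file exhausted: just stop
    print $conn $name;            # answer on the same socket
}

close $conn;
close $listener;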
The second follows the same logic as a "single-process" computation: each time the process wants to read from one of the "side" files, it sends a fake command (a simple newline will do) to the corresponding far process, then reads the answer as if it were reading from a file. The core algorithm does not need to change, as you can see.
Flavio
perl -ple'$_=reverse' <<<ti.xittelop@oivalf
Don't fool yourself.
my $s_socket = IO::Socket::INET->new(
    'LocalPort' => $S_PORT,
    'Proto'     => 'tcp',
    'Listen'    => 2,
) or die "Third: Can't create socket to Second ($!)\n";

# then later on...
print $s_socket "$match\n";
print scalar <$s_socket>;
But could not get it to work... am I forgetting something here?
I think I've tried to implement a similar algorithm - I will comment on this later in response to one of the other suggestions made to my original post. Basically, the issue is that there may be false positives (i.e. two files have the same name, but not the third). This forces me to have to return to the process that originated the initial "false match" and resume input from it until I find a match that occurs in all three files... more to come...
Thanks again people... you all are lifesavers...
#!/usr/bin/perl
use warnings;
use strict;
use IO::Socket::INET;

# Main socket, it's only for accepting connections
my $sock =
    IO::Socket::INET->new(LocalPort => 10000, Proto => 'tcp', Listen => 2)
    or die "unable to initiate socket: $!";
print STDERR "ready to accept connections\n";

# Working sockets, they're bidirectional connections towards each
# single client, so they're to be used for actual communications
my $conn1 = $sock->accept() or die "accept(): $!";
print STDERR "client 1 connected\n";

my $conn3 = $sock->accept() or die "accept(): $!";
print STDERR "other client connected\n";

# Core of algorithm should go here, just some examples of IO given
# Read from client #1
my $rec1 = <$conn1>;
# Send something to client #1
print $conn1 "Hey #1, I heard you saying [$rec1]\n";
# Read from the other client
my $rec3 = <$conn3>;
# Send something to it as well
print $conn3 "You're there, #3... did you say [$rec3]?\n";

$_->close foreach ($conn3, $conn1, $sock);
Flavio
perl -ple'$_=reverse' <<<ti.xittelop@oivalf
Don't fool yourself.
Re: sockets and such in Perl
by zentara (Cardinal) on Jul 03, 2005 at 12:06 UTC
It sounds like you are looking for select, to be able to read multiple sockets "simultaneously". You will find that there are all sorts of "extra details" when dealing with multiple sockets, such as forking vs. non-forking servers. If you need to transfer big files, a single non-forking server running with select will block while it transfers the big file, so you may need a more complicated "forking server", or even a threaded server. Another complication is whether you want to echo the data to all clients. Since you haven't shown any code yet, here is a pretty good example of a non-forking, multi-echoing server using select. The client uses fork, so start the server, then start 3 clients and see how it works. The client doesn't need to fork; you could work out some "protocol" to indicate when "end-of-send" is reached and you are switching to receive mode, but you can see it is nicer to use a forked client.
###########SERVER###############
#!/usr/bin/perl
use IO::Socket;
use IO::Select;
my @sockets;
my $machine_addr = '192.168.0.1';
$main_sock = new IO::Socket::INET(LocalAddr=>$machine_addr,
LocalPort=>1200,
Proto=>'tcp',
Listen=>3,
Reuse=>1,
);
die "Could not connect: $!" unless $main_sock;
print "Starting Server\n";
$readable_handles = new IO::Select();
$readable_handles->add($main_sock);
while (1)
{
($new_readable) = IO::Select->select($readable_handles, undef, undef, 0);
foreach $sock (@$new_readable)
{
if ($sock == $main_sock)
{
$new_sock = $sock->accept();
$readable_handles->add($new_sock);
}
else
{
$buf = <$sock>;
if ($buf)
{
print "$buf\n";
my @sockets = $readable_handles->can_write();
#print $sock "You sent $buf\n";
foreach my $sck(@sockets){print $sck "$buf\n";}
}
else
{
$readable_handles->remove($sock);
close($sock);
}
}
}
}
print "Terminating Server\n";
close $main_sock;
getc();
__END__
###########CLIENT#########################
#!/usr/bin/perl -w
use strict;
use IO::Socket;
my ( $host, $port, $kidpid, $handle, $line );
( $host, $port ) = ('192.168.0.1',1200);
my $name = shift || '';
if ($name eq '') {
    print "What's your name?\n";
    chomp( $name = <STDIN> );
}
# create a tcp connection to the specified host and port
$handle = IO::Socket::INET->new(
Proto => "tcp",
PeerAddr => $host,
PeerPort => $port
)
or die "can't connect to port $port on $host: $!";
$handle->autoflush(1); # so output gets there right away
print STDERR "[Connected to $host:$port]\n";
# split the program into two processes, identical twins
die "can't fork: $!" unless defined( $kidpid = fork() );
# the if{} block runs only in the parent process
if ($kidpid) {
# copy the socket to standard output
while ( defined( $line = <$handle> ) ) {
print STDOUT $line;
}
kill( "TERM", $kidpid ); # send SIGTERM to child
}
# the else{} block runs only in the child process
else {
# copy standard input to the socket
while ( defined( $line = <STDIN> ) ) {
print $handle "$name->$line";
}
}
__END__
I'm not really a human, but I play one on earth.
flash japh
I did try using select - in fact I even started by modifying this very example :)
I had a couple of questions in regards to using IO::Select - the first being, how do I select() specifically which client/process I want to speak to? And more importantly, when used in this manner, will the socket/connection be bidirectional (can I both read and write)?
(see ... ??? comments in code snippet below)
#this sets up $main_sock to handle multiple connections...???
$readable_handles = new IO::Select();
$readable_handles->add($main_sock);
#this returns an array of readable (and writeable)
#handles/connections...???
($new_readable) = IO::Select->select($readable_handles, undef, undef, 0);
#So now I can iterate through this array just like I would
#for any other, except it is an array of "connections"...
foreach $sock (@$new_readable)
{
#in the case that ($sock == $main_sock) this means that
#I need to listen for & add any new connections to the
#list of $readable_handles, but why not to $new_readable
#as well...???
if ($sock == $main_sock)
{
$new_sock = $sock->accept();
$readable_handles->add($new_sock);
}
else
{
$buf = <$sock>;
if ($buf)
{
#does the line $readable_handles->can_write() mean that
#these handles are not bidirectional...???
print "$buf\n";
my @sockets = $readable_handles->can_write();
#print $sock "You sent $buf\n";
foreach my $sck(@sockets){print $sck "$buf\n";}
}
else
{
$readable_handles->remove($sock);
close($sock);
}
}
}
This may be a useful implementation, but if you gurus out there could give me the play-by-play on how the code above operates, it would help me out tremendously.
Thanks again... more to come...
To specifically choose a socket to write to, you could store the "name" and $sock object in some sort of hash, to keep track of them. Then if you want to write only to name "foo", you can print to $socks{'foo'}{'socket'} (a small sketch follows after the links below). I used an array for simplicity, but you can loop through hash keys too. One thing you will have to watch out for, when printing only to one client, is getting "out-of-sync" with the other clients. Occasionally I have
seen situations where, looping through the hash-of-sockets, I need to print to all clients BUT change what I print: say, print some text to $socks{'foo'}, while $socks{'bar'} and $socks{'baz'} just get a newline, or some NO-OP type of tag. It's just something to watch out for. If you want a good explanation of how all this works:
select
client
server
After reading the above links, you will see that the modules IO::Socket and IO::Select take care of alot of the details for you.
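A rough, untested sketch of the hash-of-sockets idea, reusing $main_sock and $readable_handles from the server example above; having each client send its name as the first line it writes is just an assumed "registration" convention for illustration:
my %socks;

# After accepting a connection, read the client's name (first line sent)
# and file the socket away under that name
my $new_sock = $main_sock->accept();
chomp( my $client_name = <$new_sock> );      # e.g. "foo", "bar", "baz"
$socks{$client_name}{socket} = $new_sock;
$readable_handles->add($new_sock);

# Later on, talk to one specific client...
print { $socks{foo}{socket} } "only foo sees this\n";

# ...and keep the others in step with a no-op newline
print { $socks{$_}{socket} } "\n" for grep { $_ ne 'foo' } keys %socks;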
I'm not really a human, but I play one on earth.
flash japh
Re: sockets and such in Perl
by nothingmuch (Priest) on Jul 03, 2005 at 16:02 UTC
People have mentioned select.
I think the issue you are suffering from is that you want nonblocking IO, and you have blocking IO.
The first thing to do is to read MJD's article Suffering from Buffering?.
Once you get a handle on the issues involved, look into setting nonblocking IO modes on the filehandles of the sockets. This is an unwieldy task to get right - it takes a lot of fidgeting and fumbling, but eventually you start getting a feel for what's happening.
Once you think you can solve it with nonblocking IO and select - stop. Don't do it. It's been done. Look into Event, which can give you callbacks for each handle you pass, and will allow you to read from the input of the file.
I think I also have a simpler solution, without any of these issues.
- The first process in the pipeline is the simplest - it reads a file and prints it to a socket
- The second and third processes are almost the same
- They read a line from the socket, and then read lines from the file
- They stop when the line read from the file is equal to the line read from the socket, in which case the line is printed to its output socket,
- or when the next line sorts alphabetically after the line read from the socket, in which case the next line is read from the socket
- The third process simply stops when it finds a match, instead of looping back for more
In a unix pipeline this should be something like:
perl -pe1 file | \
perl -e 'open my $fh, "<", shift; my $line; FH: { $line = <$fh>; my $in; { $in = <STDIN>; redo if $in lt $line }; print $line if $line eq $in; redo }' | \
perl -e 'open my $fh, "<", shift; my $line; FH: { $line = <$fh>; my $in; { $in = <STDIN>; redo if $in lt $line }; if ($line eq $in) { print $line; exit }; redo }'
To translate to socketspace all you need is to replace STDIN/STDOUT with sockets opened to the right place.
Note that the code is untested.
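An expanded, more readable sketch of the same filtering idea (not a literal expansion of the one-liner above) might look like this, assuming both its file and its STDIN are sorted; the last stage would be the same except that it prints the first common line and exits:
#!/usr/bin/perl
# Print every line that appears both in the file given on the command
# line and on STDIN, assuming both inputs are sorted.
use strict;
use warnings;

my $file = shift or die "usage: $0 file < sorted-input\n";
open my $fh, '<', $file or die "can't open $file: $!";

my $mine  = <$fh>;
my $other = <STDIN>;

while (defined $mine and defined $other) {
    if ($mine eq $other) {
        print $mine;            # a common line: pass it downstream
        $mine  = <$fh>;
        $other = <STDIN>;
    }
    elsif ($mine lt $other) {
        $mine = <$fh>;          # our line sorts first: skip it
    }
    else {
        $other = <STDIN>;       # their line sorts first: skip it
    }
}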
The one catch, as I alluded to in my response to Frodo, is that there is the possibility of a "false" match (i.e. a match in two of the files, but not all three), hence even if the first two processes in the pipeline find a match, I can't guarantee it will be repeated with the third process.
My problems up until this point have really centered around this one fact: returning control to a process earlier in the pipeline and repeating the cycle of if (match) then pass on to the next process. This is complicated by the fact that the Eclipse/EPIC debugger craps out on me when I try to step through the code of any one of these processes (with the other two running in the background), and I can't seem to get my STDOUT print statements to work reliably once I've opened a socket for writing.
Can anyone give me an example of how to use a single socket in a bidirectional manner?
For instance, if I set a socket up to receive:
my $s_socket = IO::Socket::INET->new(
    'LocalPort' => $S_PORT,
    'Proto'     => 'tcp',
    'Listen'    => 2,
) or die "Third: Can't create socket to Second ($!)\n";
while (my $second = $s_socket->accept) {...}
but then later on need to send over that same socket:
while (<$second>) {
    if ($_ eq $match) {
        print $s_socket "$match\n";
        scalar <$s_socket>;
        last;
    }
}
Am I doing this in the correct manner, and what steps am I forgetting? In a nutshell, the statements above (both on the sending and on the receiving process end) are what has been giving me the most headache. Any help would be much appreciated!
Think of it this way... Process 2 only prints out lines which it knows both process 1 and process 2 have. These are then relayed to process 3. So process 3 receives a list of lines that are all known to match in the other two processes, plus its own list of lines. The moment it finds a line that is equal in both - it's done - we know that line is in all 3 lists.
The pipelining structure basically filters out anything that is definitely not a match in the transition from 1 to 2. Then it filters out anything that is definitely not a match between 2 and 3. The first thing that comes out the other end is the answer.
For an example of a bidirectional TCP/IP type thing, without the IO::Socket interface (but you should be able to cope), see perlipc.
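(As a minimal sketch of the bidirectional part, in case it helps: once accept() hands you a connected socket, you read and write on that handle, not on the listening socket. The port number below is just an example.)
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 10000,
    Proto     => 'tcp',
    Listen    => 1,
) or die "listen: $!";

my $peer = $listener->accept or die "accept: $!";

# Read and write on the *accepted* handle, in either order
my $line = <$peer>;
print $peer "got: $line";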
But I stress that bidirectionality is not necessary =)
Re: sockets and such in Perl
by neniro (Priest) on Jul 03, 2005 at 10:31 UTC
Isn't it easier to use threads instead of processes for this purpose, because you can access the same variables? I haven't used either of them, but I thought that was one of the advantages of using threads?
Re: sockets and such in Perl
by spurperl (Priest) on Jul 03, 2005 at 16:26 UTC
A common pattern in multi-process/thread design you could use to simplify your life here is the "manager-worker" model. In short, one process acts as a manager and spawns workers as needed. Each worker knows only of its manager, through a single socket. The worker's task is simple: it gets a job from the manager, does it and reports back. The manager handles the coordination between workers and distributes jobs to them.
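A rough, untested sketch of that shape using socketpair and fork; the file names and the line-based "next"/"stop" protocol are only assumptions for illustration:
#!/usr/bin/perl
use strict;
use warnings;
use Socket;
use IO::Handle;

$SIG{PIPE} = 'IGNORE';    # don't die if a worker has already gone away

my @files = ('names1.txt', 'names2.txt', 'names3.txt');   # assumed inputs
my @workers;

for my $file (@files) {
    socketpair(my $mgr, my $wrk, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
        or die "socketpair: $!";
    $mgr->autoflush(1);
    $wrk->autoflush(1);

    defined(my $pid = fork()) or die "fork: $!";
    if ($pid == 0) {                  # worker: serve names on request
        close $mgr;
        open my $fh, '<', $file or die "open $file: $!";
        while (my $cmd = <$wrk>) {
            last if $cmd =~ /^stop/;
            my $name = <$fh>;
            last unless defined $name;
            print $wrk $name;
        }
        exit 0;
    }
    close $wrk;                       # manager keeps its end
    push @workers, $mgr;
}

# Manager side: ask each worker for one name, just as a demonstration
for my $w (@workers) {
    print $w "next\n";
    my $name = <$w>;
    print "got: $name" if defined $name;
}
print $_ "stop\n" for @workers;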
Re: sockets and such in Perl
by CountZero (Bishop) on Jul 03, 2005 at 20:04 UTC
I would set up a central database and have each process save its names, one by one, into that database, each in its own table. After saving a name, you do a select over the three tables, joined on the name column, for names which appear in all three tables, with the results ordered alphabetically. The top one is your answer. Once you get an answer, the programs report it and stop.
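A sketch of what that query might look like via DBI; the table and column names are invented, and SQLite is just an example backend:
use DBI;

# Assumed schema: tables names1, names2, names3, each with a single
# column called name, filled by the three processes.
my $dbh = DBI->connect('dbi:SQLite:dbname=names.db', '', '',
                       { RaiseError => 1 });

my ($first_common) = $dbh->selectrow_array(<<'SQL');
    SELECT n1.name
      FROM names1 n1
      JOIN names2 n2 ON n2.name = n1.name
      JOIN names3 n3 ON n3.name = n1.name
     ORDER BY n1.name
     LIMIT 1
SQL

print defined $first_common ? "match: $first_common\n" : "no match yet\n";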
CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
Re: sockets and such in Perl
by raafschild (Novice) on Jul 04, 2005 at 00:47 UTC
For this problem I would use the following algorithm:
read the first name from each file into name1, name2 and name3
while not (name1 == name2 and name2 == name3) {
    determine which file supplied the name with the lowest alphabetical value
    read the next name from that file into the corresponding name1, name2, name3 variable
}
print name1, name2, name3
The nice thing about this algorithm is that it doesn't matter if you use files, pipes or sockets.
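A minimal sketch of that loop over three plain filehandles, assuming the inputs are sorted; the file names are placeholders, and swapping the handles for sockets changes nothing else:
#!/usr/bin/perl
use strict;
use warnings;

my @files = ('names1.txt', 'names2.txt', 'names3.txt');
my @fh;
for my $file (@files) {
    open my $h, '<', $file or die "can't open $file: $!";
    push @fh, $h;
}

# Read the first name from each file
my @name = map { scalar readline $_ } @fh;
for (@name) { chomp if defined }

while ( ( grep { defined } @name ) == 3
        and not( $name[0] eq $name[1] and $name[1] eq $name[2] ) ) {

    # Advance the file whose current name sorts lowest alphabetically
    my ($lowest) = sort { $name[$a] cmp $name[$b] } 0 .. 2;
    $name[$lowest] = readline $fh[$lowest];
    chomp $name[$lowest] if defined $name[$lowest];
}

if ( ( grep { defined } @name ) == 3 ) {
    print "common name: $name[0]\n";
}
else {
    print "one of the files ran out before a common name was found\n";
}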
For the socket solution you would create three server processes and one client process. The client opens a socket to each of the servers. Each server process reads one of the files and writes the names one by one to the socket.
The perlipc man page shows several examples of clients and a server using IO::Socket. For the client you should not have to do anything more complex than the Simple client. The server can be simpler than the example shown; just remove all the prompts and command interpretation, and open the file and write the names to the client.
HTH