tweetiepooh has asked for the wisdom of the Perl Monks concerning the following question:

We have a system that collects data from a remote port and processes collected data into a database.

Problem : the port only allows one connection so if we want to do testing we have to either work on the live system or switch off the live system. Neither is ideal.

Solution : duplicate the remote port locally. Write some code that makes the connection and then creates local ports from which the current system can read. Initially we simply duplicate the feed to two local ports, one for live the second for testing. The writer needs to cope with the client not being there and not blocking if nothing is reading.

Eventually we may work on a forking server so we can have any number of clients connect and get the feed. This feed is "streamed". The data is ASCII and not too high in volume. Speed is mostly not an issue.
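To illustrate the "must not block if nothing is reading" requirement: with the writing end set non-blocking, a full kernel buffer makes syswrite return undef with EAGAIN instead of stalling the writer. A minimal self-contained sketch (using a socketpair as a stand-in for a client socket nobody reads):

```perl
use strict; use warnings;
use Socket;
use IO::Handle;
use Errno qw(EAGAIN EWOULDBLOCK);

# Nobody ever reads from $reader, so the kernel buffer fills up.
socketpair(my $reader, my $writer, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";
$writer->blocking(0);    # the key step: writes now fail fast

my $chunk = 'x' x 8192;
my ($sent, $deferred) = (0, 0);
for (1 .. 10_000) {
    my $n = syswrite($writer, $chunk);
    if (defined $n)                              { $sent += $n }
    elsif ($! == EAGAIN || $! == EWOULDBLOCK)    { $deferred++; last }
    else                                         { die "write: $!" }
}
# A real tee would buffer or drop the data for this client here
# instead of blocking the whole feed.
```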

Q1) Is this something best done in Perl? I am looking at the fwdport code in the Perl Cookbook as a model, or is there a better way using free utilities?

Q2) Are two fixed ports the way to start, or is it just as easy to write forking-server code from the start?

Q3) Is the fwdport idea the way to go, or is there a better way?

In all these I know there is another way; there always is in Perl. I'm not looking for total solutions; pointers to where to look would be a great help. How do you ask Google when a word like "port" returns so many hits? I don't want to "port" Perl to the Xbox or read about other topics like that.

Replies are listed 'Best First'.
Re: Port duplicator
by Corion (Patriarch) on Sep 28, 2005 at 11:54 UTC

    What kind of port are you talking about? I was recently in Portugal, in the city of Porto and at the river (near the port), they served interesting wine, called portwine.

    But even if we restrict the discussion of ports to a computer, there are at least three ports commonly used that spring to mind:

    • The serial port, also known as RS-232 or RS-422 (hardware).
    • The parallel port, also known as IEEE 1284 (hardware).
    • The TCP/UDP port number of a TCP or UDP listening socket (software).

    If you are talking hardware, I would suggest a hardware solution. Either plug the test system into a different port of the computer, or splice the cable to connect to two machines.

    If you are talking software, there are various ways of doing what you're interested in, at least if I understand what you are actually interested in.

    I interpret your statements in the software sense as having some software that expects to talk TCP/IP on a fixed port number. You want to make two (or more) instances of that software coexist on the same machine, preferably on the same network adapter.

    If you cannot change the port number (the easiest way), you can maybe use your operating system's facilities. The *BSD family of operating systems has convenient port redirection facilities (even available as a Perl module). If you cannot do this redirection (transparently) within your OS, you might want to see whether you can run your local program via inetd, which can launch a new process for every incoming connection.
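For the inetd route, the pieces might look roughly like this (the service name, port number, and script path are hypothetical, not from the thread). Note that inetd's one-process-per-connection model suits a replay server better than a single shared live feed:

```
# /etc/services -- give the feed a name and port (hypothetical values)
feedtee         8099/tcp

# /etc/inetd.conf -- inetd listens on 8099 and starts one
# feedtee.pl process per incoming connection
feedtee stream tcp nowait nobody /usr/local/bin/feedtee.pl feedtee.pl
```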

    If that solution does not seem what you want, maybe you can specify what you want more clearly in terms of a comparison to POE, or Apache, or HTTP::Proxy, or even netcat.

    Thanks to calin for spotting various typos.

      Yup, there are lots of ports (for boats), ports (wine), ports (conversion to new platform), ports (hardware) and ports (soft). That's what makes asking google so much fun. We are talking IP ports here.

      The source server has a TCP/IP port out of which it spews data. Currently, local software connects to that port and processes the data into a database. The source port only allows a single connection and we can't play with it.

      Our idea is to have something running locally that will connect to the source port, via TCP/IP, and then present the data on two local TCP/IP ports. The current software will then connect to one of these local ports. The other port is then free for another version of the client software, receiving the same data as the first.

      One idea is to have the local port act as a forking server so we can connect lots of clients, each would receive the same data. It's the sort of thing you'd use tee for on pipes.

      PS: all servers will be running Solaris. The source server and client software are proprietary code we have no control of. We can configure the client to connect to any source ip/port combination.

        The idea of using/creating a tee for ports is sound. The question remains on how to "best" implement it.

        Let me first make a few definitions as I'm confusing the terms a bit:

        Let the "remote server" be the machine that spews forth data on a single port. Let the "local server" be the program "tee for ports" you're trying to find/write. Let "client" be the program(s) that currently listen to the "remote server" but which you want to listen to the "local server".

        As it seems that your machine will spew forth its data without interaction, I (personally) would prefer "prerecorded" test sessions that can be replayed to a client: more like cat for ports than tee. That effect is easily achieved by writing the spewed data to a file from a specialized client, and then having a simple looping local server spew the data from that file, both more or less using sysread. Such a thing could also be implemented using netcat, I think.
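The record-and-replay idea can be sketched in a few lines of Perl (names and chunk size are arbitrary; in real use $out would be an accepted client socket, and the in-memory handle below is just a stand-in so the sketch is self-contained):

```perl
use strict; use warnings;
use File::Temp qw(tempfile);

# "cat for ports": record the feed once, then replay the file to each
# test client. replay_to() is the core loop; any writable handle works.
sub replay_to {
    my ($file, $out) = @_;
    open my $in, '<', $file or die "open $file: $!";
    binmode $_ for $in, $out;
    while (read($in, my $buf, 8192)) {
        print {$out} $buf;
    }
    close $in;
}

# Demo: "record" a session to a temp file ...
my ($fh, $session) = tempfile(UNLINK => 1);
print {$fh} "tick 1\ntick 2\n";
close $fh;

# ... then replay it to a client (an in-memory handle here).
open my $client, '>', \my $received or die $!;
replay_to($session, $client);
close $client;
```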

        So, let's now look at the premade wheels for a tee implementation. Let's assume that the clients will connect at an unspecified time and will also disconnect without telling the local server, likely because of a program crash. The local server should still serve the data to all remaining clients and not stall because one client isn't writeable anymore. Maybe it should also only store a fixed amount of data for any client before it considers the client dead, to prevent memory exhaustion.

        As we don't want to stall, the simple use of IO::Tee won't work, as it will block if the data can't be written to a specific client immediately.

        You could use threads together with Thread::Queue to distribute the incoming data from the remote server to all connected clients. This is likely the easiest way if you have a multithreaded Perl.
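Assuming a threads-enabled perl (and Thread::Queue 3.01+ for end()), the fan-out can be sketched like this. It is a toy with an in-memory "feed"; real code would have the main thread read lines from the remote socket, and each writer thread print to its client's socket:

```perl
use strict; use warnings;
use threads;
use Thread::Queue;

# One queue per connected client; the reader fans incoming lines out
# to every queue, and one writer thread per client drains its own
# queue, so a slow client only delays itself.
my @queues = map { Thread::Queue->new } 1 .. 2;

my @writers = map {
    my $q = $_;
    threads->create(sub {
        my @got;
        while (defined(my $line = $q->dequeue)) {
            push @got, $line;   # in real code: print to the client socket
        }
        return @got;
    });
} @queues;

# Simulated remote feed: fan each line out to all client queues.
for my $line ("alpha\n", "beta\n", "gamma\n") {
    $_->enqueue($line) for @queues;
}
$_->end for @queues;            # makes dequeue return undef when drained

my @results = map { [ $_->join ] } @writers;
```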

        If you have an unthreaded Perl, you could look at creating a multiplex server using POE with POE::Component::Proxy::TCP and some tee-logic. Personally, I would avoid POE in this case and write my own multiplexer using select or IO::Select and do the buffering and distribution of packets manually.
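A self-contained sketch of that select-plus-manual-buffering idea, with two in-process stand-in clients (a real tee would also select on the remote feed and the listener for readability, and run this as its main loop):

```perl
use strict; use warnings;
use IO::Select;
use IO::Socket::INET;

# Listener on an ephemeral loopback port.
my $listener = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1', LocalPort => 0,
    Listen => 5, Reuse => 1) or die "listen: $!";
my $port = $listener->sockport;

# Two in-process "clients" for the demo.
my @clients = map {
    IO::Socket::INET->new(PeerAddr => '127.0.0.1', PeerPort => $port)
        or die "connect: $!";
} 1 .. 2;

# Accept them; keep a per-client output buffer so a stalled client
# never blocks the others.
my (%conn, %buf);
for (1 .. @clients) {
    my $c = $listener->accept or die "accept: $!";
    $c->blocking(0);
    $conn{fileno $c} = $c;
    $buf{fileno $c}  = '';
}

# Pretend this just arrived from the remote feed:
$buf{$_} .= "line1\nline2\n" for keys %buf;

# Flush each buffer only when that client is writable.
my $wsel = IO::Select->new(values %conn);
while (grep { length } values %buf) {
    for my $c ($wsel->can_write(1)) {
        my $id = fileno $c;
        next unless length $buf{$id};
        my $n = syswrite($c, $buf{$id});
        if (defined $n) { substr($buf{$id}, 0, $n) = '' }
        else { delete $buf{$id}; $wsel->remove($c); close $c }  # dead client
    }
}
close $_ for values %conn;

# Each demo client should now see the full feed.
my @got = map { local $/; scalar <$_> } @clients;
```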

        Sorry, but there are no such things as "IP ports". IP is a network layer that has no port concept; ports exist in TCP and in UDP. The fact that you talk about "TCP/IP" later does not help to clarify the matter, simply because the protocol stack is usually referred to as "TCP/IP", but UDP is part of the stack as well.

        Why bother about this distinction? Simply because using TCP or UDP could dramatically change your mileage. You don't say how the intermediate machine is going to take the data, for one thing. Or which sub-connector will be the leader of the main connection, to name another.

        The whole mechanism is also quite obscure. You say that "local software connects to that port, processes the data into a database". But later you talk about having a second client "receiving the same data as the first". There's a model ambiguity here. In the first case the model is more like a pull, in which the client enters and asks for some particular data inside the database. In the second we have something more similar to a push, in which any client is presented the very same data at every session.

        The latter case is of course by far the simplest. You only have to connect, download the data, then wait for any connection on the other ports. OTOH, the former case is full of pitfalls, because you're basically asking for a way to correctly interleave the requests coming from two clients over the same connection, which depends on the particular application.

        Flavio
        perl -ple'$_=reverse' <<<ti.xittelop@oivalf

        Don't fool yourself.
Re: Port duplicator
by zentara (Cardinal) on Sep 29, 2005 at 12:32 UTC
    From your description, you collect data from the remote port, which seems to indicate that the data is sent in one direction only, and that would seem to be simple.

    Your redirector script would seem to be a basic "multi-echo server", simplified by the fact that all clients are listening. The big problem is writing everything inside a while(1) loop which constantly reads the datain socket, checks for new local connections, and echoes all the datain to existing local connections.

    Without actually writing a working script, this is how I would proceed (step by step). This is untested. I'm not sure if you can use the same port number for your local connections, as for the remote. It would seem you could, but they are different in this example.


    I'm not really a human, but I play one on earth. flash japh
      Thanks for all the tips, here and in various books and places that monks frequent outside the monastery.

      This is what I have to date; a few little issues remain to sort out:
      #!/usr/local/bin/perl -w

      # modules required
      use strict;
      use IO::Socket;
      use IO::Tee;
      use Fcntl;

      # get port number and server IP
      my $SERVICE = <DATA>;
      my $REMOTE  = <DATA>;
      chomp($SERVICE, $REMOTE);    # strip newlines before use

      # connection to remote server (data source)
      my $remote = IO::Socket::INET->new(
          PeerPort => $SERVICE,
          PeerAddr => $REMOTE,
          Proto    => 'tcp',
          Type     => SOCK_STREAM,
      ) or die "Can't connect: $!\n";

      # local port to accept connections from probes
      my $local = IO::Socket::INET->new(
          LocalPort => $SERVICE,
          Proto     => 'tcp',
          Listen    => 3,
          Reuse     => 1,
      ) or die "Can't create server: $!\n";

      # make the local listening socket non-blocking so accept()
      # returns immediately when no probe is waiting
      fcntl($local, F_SETFL, fcntl($local, F_GETFL, 0) | O_NONBLOCK);

      # build a hash for client connections;
      # include STDOUT so I can see what's happening on the server locally
      # (can drop for production)
      my %clients = (StdOut => \*STDOUT);
      my $buf;

      # loop forever
      while (1) {
          # if a probe connects, add details to the hash
          my $client = $local->accept();
          $clients{$client} = $client if $client;

          # remove clients that have disconnected
          foreach (keys %clients) {
              delete $clients{$_}
                  if $_ ne "StdOut" and !$clients{$_}->connected();
          }

          # send data to all connected clients
          my $tee = IO::Tee->new(values %clients);
          print $tee $buf if defined $remote and $buf = <$remote>;
      }

      __END__
      8099
      <server ip goes here>
Re: Port duplicator
by Anonymous Monk on Oct 13, 2010 at 22:53 UTC
    Hi, did you eventually get a proper solution? I am facing a similar issue. Thanks.