Mmm... so you are trying to create a server process that accepts data as fast as possible and saves it to files on the hard disk, to be picked up later by a slow process that processes them. And the problem is with concurrency (no more than 8 clients connecting at the same time). Would copying with scp (with ssh keys, so you do not need a password, for example) not do? This is a bit like what MQ does.
Now, usually, you schedule each sender a bit differently, so that the results are not all sent at the same time. If the data is not being processed right away, pre-compress it before sending. Maybe add a handshake to your protocol, for example: "can't talk now, busy, come back in 10 seconds" (5 seconds + a random number). Caveat: verification of received data, CRCs, retries, etc. will have to be added for robustness if you roll your own.
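That "busy, come back later" handshake could look something like this. A minimal sketch; the concurrency limit, message wording, and jitter range are all assumptions, not part of any existing protocol:

```perl
#!/usr/bin/perl
# Sketch of the "come back later" handshake: when the server is at its
# concurrency limit, tell the client to retry after 5 seconds plus a
# random jitter, so the clients do not all come back in lockstep.
use strict;
use warnings;

my $MAX_ACTIVE = 8;    # assumed limit on simultaneous uploads

sub greet_client {
    my ($active_uploads) = @_;
    if ( $active_uploads >= $MAX_ACTIVE ) {
        my $wait = 5 + int rand 5;           # 5s base + 0-4s random jitter
        return "BUSY retry-after $wait\n";   # client sleeps $wait, reconnects
    }
    return "OK send-data\n";
}

print greet_client(8);    # at the limit: a BUSY line with a retry delay
print greet_client(3);    # below the limit: "OK send-data"
```

The random jitter is the "5 seconds + a random number" idea from above: it spreads the retries out instead of having all the refused clients hammer the server again at the same instant.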
Now, there are many client/server examples. There are many webservers, like httpi; study those. As you will see, the server listens on a port, but as soon as a connection is made, it forks a child process (and it counts the children; when it has more than x, it denies new connections). The forked child process, using its own connected socket, is able to receive data and process it at leisure (or, as you are asking, save it to a file: write it as $$.tmp, then when the file is closed, rename it to $$.dat or move it into a directory from which a single process will pick it up).
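The accept/fork/count-the-children loop and the $$.tmp-then-rename handoff could be sketched like this. The port number and the child limit of 8 are assumptions; error handling is minimal:

```perl
#!/usr/bin/perl
# Sketch of a forking server: listen on a port, fork a child per
# connection, deny clients past a limit, and have each child write its
# payload to <pid>.tmp, renaming to <pid>.dat when complete, so the
# slow pickup process never sees a half-written file.
use strict;
use warnings;
use IO::Socket::INET;
use POSIX ':sys_wait_h';

my $MAX_CHILDREN = 8;    # assumed concurrency limit
my $children     = 0;

# Write data to <name>.tmp, then rename to <name>.dat: rename is atomic
# on the same filesystem, so the consumer only ever sees finished files.
sub save_atomically {
    my ( $name, $data ) = @_;
    open my $tmp, '>', "$name.tmp" or die "open $name.tmp: $!";
    print {$tmp} $data;
    close $tmp or die "close: $!";
    rename "$name.tmp", "$name.dat" or die "rename: $!";
    return "$name.dat";
}

sub run_server {
    my ($port) = @_;
    local $SIG{CHLD} = sub {    # reap finished children, keep count honest
        $children-- while waitpid( -1, WNOHANG ) > 0;
    };
    my $server = IO::Socket::INET->new(
        LocalPort => $port,
        Listen    => 10,        # pending-connection backlog
        ReuseAddr => 1,
    ) or die "listen: $!";

    while ( my $client = $server->accept ) {
        if ( $children >= $MAX_CHILDREN ) {
            print {$client} "BUSY\n";    # deny connection number 9
            close $client;
            next;
        }
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ($pid) { $children++; close $client; next; }    # parent
        my $data = do { local $/; <$client> };    # child: slurp the upload
        save_atomically( $$, $data // '' );
        exit 0;
    }
}

run_server(9000) if @ARGV and $ARGV[0] eq '--serve';    # port is an assumption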
Scenario 2: many clients on a local LAN, and the server is far away. And you want a proxy of some sort, so that only the proxy makes contact with that remote server?
Scenario 3: you actually mean "named pipes" when you talk about a "separate file just containing socket information", and you are asking whether multiple processes can write to it, to be read by one process and sent remotely. For the latter: be careful, named pipes fill up, and other processes cannot write in between (or you get jumbled data). So you just move the bottleneck to the local server. In this case, use local files and start an upload for each file. (Or do all those 8 files need to arrive on the server in a certain order?)
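The "local files instead of one named pipe" idea could be sketched like this: each writer gets its own uniquely named file, so nothing can interleave and no shared pipe can fill up. The hostname-pid-timestamp naming scheme is an assumption, just one way to make the names unique:

```perl
#!/usr/bin/perl
# Sketch: each writer process spools its data into its own file, named
# from hostname + pid + timestamp (an assumed scheme), written as .tmp
# and renamed to .dat once complete, so the reader never sees partials.
use strict;
use warnings;
use Sys::Hostname;

sub spool_file {
    my ( $dir, $data ) = @_;
    # unique per writer; add a sequence number if one process can
    # spool twice within the same second
    my $name = join '.', hostname(), $$, time();
    open my $fh, '>', "$dir/$name.tmp" or die "open: $!";
    print {$fh} $data;
    close $fh or die "close: $!";
    rename "$dir/$name.tmp", "$dir/$name.dat" or die "rename: $!";
    return "$dir/$name.dat";
}
```

Usage would be something like `my $saved = spool_file( '/var/spool/myapp', $data );` in each writer, with the uploader watching that directory for `.dat` files.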
What I would choose:
With the limited information, I would go for running 8 scheduled scripts that extract the different pieces of information from your single client (not sure if in parallel or sequentially) and write the data to a directory. Once all 8 are done, we start an uploader that makes one connection to the server. The upload could use scp -C to protect the content while traveling the network. The destination file name is unique; for example, it contains the hostname. After sending the file by secure copy, a second file is sent, which contains the MD5, so the server can tell whether the data is corrupted, and send an email with an error message if there is something wrong with that file. Doing it like this allows you to follow each step and to debug easily by looking inside the directories. Keep it simple, but as always, timtoady.
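The uploader step could be sketched like this, using the core Digest::MD5 module for the companion checksum file and scp -C for the transfer. The destination string and file names are placeholders, not real paths:

```perl
#!/usr/bin/perl
# Sketch of the uploader: write an md5sum(1)-style companion file next
# to the data file, then push both with "scp -C" (compressed, over ssh
# keys). The server side can then run "md5sum -c" to verify the upload.
use strict;
use warnings;
use Digest::MD5;
use Sys::Hostname;

# Write "<md5>  <basename>" to <file>.md5, matching md5sum output format.
sub write_md5_companion {
    my ($file) = @_;
    open my $in, '<', $file or die "open $file: $!";
    binmode $in;
    my $digest = Digest::MD5->new->addfile($in)->hexdigest;
    close $in;
    ( my $base = $file ) =~ s{.*/}{};
    open my $out, '>', "$file.md5" or die "open $file.md5: $!";
    print {$out} "$digest  $base\n";
    close $out or die "close: $!";
    return "$file.md5";
}

sub upload {
    my ( $file, $dest ) = @_;    # $dest like 'user@server:/incoming' (assumed)
    my $md5 = write_md5_companion($file);
    system( 'scp', '-C', $file, $dest ) == 0 or die "scp $file failed";
    system( 'scp', '-C', $md5,  $dest ) == 0 or die "scp $md5 failed";
}

# e.g.: upload( hostname() . '.dat', 'user@server:/incoming' );
```

Sending the .md5 file second also acts as a completion marker: the server can ignore a .dat file until its .md5 companion has arrived and verified.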