soka. hmm. but... I tested this once; the program did two HTTP
requests: one closed the socket after it found what it wanted
in the response headers, the other read everything through.
The first one was faster, so I assumed that somehow killing
the socket saved me time. I guess it was just coincidence;
I didn't really repeat the test, it was only a side effect.
Now, forgive my ignorance on TCP issues (I did a tutorial on
it, but it mostly dealt with the ACK/SYN ping-pong), but how does
it decide when to download, then? When I open a socket, do I
automatically start sucking data into the buffer? So if I
opened a socket to a connection that just kept feeding
and feeding data, and left the socket alone, I'd get a buffer
overflow eventually?-) | [reply] |
The deal is this. Every socket has a receive buffer of a fixed (though tunable) size. When you open a TCP connection, you usually also settle on some protocol layered on top of TCP, perhaps HTTP, FTP, etc. That layer determines what the server expects to see (or wants to see, anyway) and what the client expects to see. Once the three-way SYN/ACK handshake is complete, it is entirely up to the protocol to decide what happens next. For all you know, the client could send some data, then wait for the server to process it and send something back, like an echo server does. If the server takes a lot of time to process it, then the only thing the client can do is wait, perhaps with a UN*X select. Your problem is that you are imagining a socket as an ever-flowing pipe, which is not true at all. Yes, TCP has options such as SO_LINGER and keepalives, but otherwise data retrieval can be erratic or spontaneous. With TCP (a connection-oriented protocol), data arrives in discrete chunks called segments, carried in packets (which involve an IP layer, a TCP layer, and other framing).
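To make the "wait, perhaps with a select" point concrete, here's a minimal sketch in Python (the same select() call exists in Perl via IO::Select); the socketpair stands in for a real client/server connection, and all names are illustrative:

```python
import select
import socket

# A connected local pair stands in for a TCP client/server connection.
client, server = socket.socketpair()

# Before the peer writes anything, select() says there is nothing to read.
first_ready, _, _ = select.select([client], [], [], 0)

# Data shows up on the client side only when the peer decides to send it.
server.sendall(b"response bytes")
ready, _, _ = select.select([client], [], [], 1.0)
data = client.recv(1024)
```

The first select() returns an empty readable list; only after the peer actually sends does the socket become readable. Nothing "flows" until the other side writes.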
I have an old book that claims most UN*X socket buffers are at least a few Mb, but it overstates things: default socket buffers are typically tens to hundreds of kilobytes, and you can tune them with setsockopt(SO_RCVBUF). If you process data quickly enough, the buffer will rarely fill, and even if it does fill, nothing is lost: TCP flow control handles it. The receiver advertises how much buffer space it has left, and when that window drops to zero the server simply stops sending until you drain some data (segments lost in transit are retransmitted separately). That's TCP! You don't even have to worry about it! When you read on <SOCKET>, you may be handed the data from one or more segments, or you may block, since the buffer is empty. You can pretty much read from the socket buffer as you wish, without worrying about details such as setsockopt, etc. When you initially open a socket, its buffer is empty. Only when a checksummed segment is received in order is an ACK sent and the actual data made available to your perl script via the buffer.
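You can see both points from a short Python sketch. The socketpair here is a Unix-domain pair rather than a real TCP connection, so treat it as an assumption-laden stand-in, but the kernel buffering and backpressure behavior it demonstrates is the same idea:

```python
import socket

reader, writer = socket.socketpair()

# Ask the kernel how big the receive buffer actually is -- typically
# tens or hundreds of kilobytes, not megabytes.
rcvbuf = reader.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(rcvbuf)

# If the reader never drains its buffer, the writer does not "overflow"
# anything: once the in-kernel buffers fill, further sends simply block
# (or fail with EWOULDBLOCK in non-blocking mode).
writer.setblocking(False)
sent = 0
try:
    while True:
        sent += writer.send(b"x" * 4096)
except BlockingIOError:
    pass  # buffers full; the writer must wait for the reader to catch up
```

So the answer to "would I get a buffer overflow eventually?" is no: the sender gets throttled instead.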
Yes, if you close the connection once you have the data you need, nothing lethal happens. This is perfectly legal and an easy way to speed up the program. The TCP layer alerts the server that the connection has been closed (in TCP you can even half-close a connection with shutdown, but that's a different story) and the server won't bother to send anything more. In this case it's just like reading a file: read as much as you need and scrap the rest. Why would you even need to read the rest? So closing the socket when you're done with it is perfectly fine.
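The "read the headers, then hang up" pattern looks like this in Python; the response bytes and the socketpair standing in for a real HTTP server are made up for illustration:

```python
import socket

client, server = socket.socketpair()

# Stand-in for an HTTP server response: status line, headers, blank
# line, then a long body we don't care about.
server.sendall(b"HTTP/1.0 200 OK\r\n"
               b"Content-Type: text/html\r\n"
               b"\r\n" + b"<body>" * 100)

# Read only until the end of the headers...
headers = b""
while not headers.endswith(b"\r\n\r\n"):
    headers += client.recv(1)

# ...then close. The kernel tears the connection down, the unread body
# is discarded, and the server learns the peer has gone away.
client.close()
print(headers.split(b"\r\n")[0])   # -> b'HTTP/1.0 200 OK'
```

Byte-at-a-time recv is slow but keeps the sketch simple; the point is that nothing bad happens to the bytes you never read.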
AgentM Systems nor Nasca Enterprises nor
Bone::Easy nor Macperl is responsible for the
comments made by
AgentM. Remember, you can build any logical system with NOR.
| [reply] |