in reply to Re^4: changing tcp parameters when establishing connection
in thread changing tcp parameters when establishing connection

Update2: I'm not sure why this is getting downvoted, since the technical information here is correct. At the risk of collecting more negative votes, I have included full demonstration code at the end of this post: a script that impacts the TCP window-size scaling factor from userland. /Update2

Sorry, but that is clearly not what the RFC I linked to said. Quoting here:

The maximum receive window, and therefore the scale factor, is determined by the maximum receive buffer space. In a typical modern implementation, this maximum buffer space is set by default but can be overridden by a user program before a TCP connection is opened. This determines the scale factor, and therefore no user interface is needed for window scaling.

Update: Breaking this down, the quote above says "the scaling factor is determined by the buffer space, which the application has access to. Thus we do not need to give the userland API additional control beyond what the kernel already supports." /Update

Notice the one important piece here: userland applications can take advantage of window scaling (again, provided the administrator and kernel are configured for such support, as I have mentioned several times now) without needing any additional API.

Perhaps I am misunderstanding your point of contention, but you seem to be suggesting that there is no API that can control this. The RFC, and the system documentation, say otherwise. Obviously if the feature is disabled in the kernel or by the admin, one cannot have it. But per the RFC, it is possible for an application to avoid window scaling entirely by setting its buffers small enough that the kernel will not enable the feature.

I never said an application gets to set kernel-wide parameters. I pointed out in my last comment, and again here, that applications have an impact on window scaling. It is entirely possible to change the buffers (which are socket-level options), and those do have an impact on the window scaling. Again, see the quote above and the tcp manpage referenced earlier in the thread.
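To make that concrete before the full script below, here is a condensed sketch of the knob in question (assuming Linux and Perl's core Socket module; the full demonstration with error handling follows later in this post):

use Socket qw(AF_INET SOCK_STREAM IPPROTO_TCP SOL_SOCKET SO_RCVBUF);

# Create the socket but do NOT connect yet: the window scale is negotiated
# in the SYN, so the buffer must be sized before connect().
socket(my $s, AF_INET, SOCK_STREAM, IPPROTO_TCP) or die "socket: $!";

# A deliberately tiny receive buffer; with this in place the kernel has no
# reason to offer any window scaling (a shift count of 0) in its SYN.
setsockopt($s, SOL_SOCKET, SO_RCVBUF, pack("i", 2000)) or die "setsockopt: $!";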

Part of Update2: Code that shows you can impact the window-scaling factor from userland follows. If nothing else, it should help others understand how socket code works and what control userland has over the process.

So, if your kernel has a "pretty large" SO_RCVBUF by default, running this code (while capturing packets with tcpdump) will show the default window-scaling option in the outgoing SYN. On my Ubuntu OS, I get a window-size scaling factor of 128.

However, if you pass a very small buffer size to the script, such as 2000, the packet capture shows a window-size scaling factor of 1! Bam, we just impacted the window-size scaling factor from userland. Pretty neat, right?

Now, of course, this requires that the feature be enabled in your kernel to begin with, but it shows how a userland application can exert some control over the coarse options that the kernel and system administrator set. These options are helpful when a programmer needs more direct control over a socket.
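(As an aside, on Linux the kernel-wide switch lives in procfs, so you can check it from Perl as well; this assumes the standard /proc layout:)

# Read the system-wide window-scaling switch: 1 = enabled, 0 = disabled.
open my $fh, '<', '/proc/sys/net/ipv4/tcp_window_scaling' or die "open: $!";
chomp(my $enabled = <$fh>);
print "window scaling enabled system-wide: $enabled\n";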

Here's the code:

use strict;
use warnings;

require IO::Handle;      # lets us use sockets as objects.
require IO::Socket::IP;  # Implicitly included above, yet list it explicitly:
require Socket;

# When no arg1 is passed, the socket buffer section is skipped.
# Otherwise, this is the value to set as the SO_RCVBUF on the socket.
my $buf_size = shift;

# Notably, we cannot use IO::Socket::IP or related classes.
# Setting socket options (like SO_RCVBUF we do later) requires we do them on
# an otherwise *unconnected* socket. We handle this ourselves here:
my $host = "google.com";
my $port = 80;

socket(my $sock, Socket->AF_INET, Socket->SOCK_STREAM, Socket->IPPROTO_TCP);
defined $sock or die "no socket: $!";

# Set socket buffer, when requested:
my $size;
if ($buf_size) {
    my $rc;

    # Show current buf:
    $size = getsockopt($sock, Socket->SOL_SOCKET, Socket->SO_RCVBUF);
    die "No buf size??" unless defined $size;
    printf "Size of buffer before setting is: %s\n", unpack("i", $size);

    # Set new size:
    $rc = setsockopt($sock, Socket->SOL_SOCKET, Socket->SO_RCVBUF,
                     pack("i", $buf_size));
    die "setsockopt() failed" unless ($rc);
}

# Regardless, show the current size:
$size = getsockopt($sock, Socket->SOL_SOCKET, Socket->SO_RCVBUF);
die "No buf size after set??" unless defined $size;
printf "Size of buffer is now: %s\n", unpack("i", $size);

print "Sleeping 2..\n";
sleep 2;

my $hints = {
    family   => Socket->AF_INET,
    protocol => Socket->IPPROTO_TCP,
};
my ($err, @addrs) = Socket::getaddrinfo($host, $port, $hints);
die "getaddrinfo error: $err ($!)" if ($err);

my $ai = shift @addrs or die "no addresses for the host!";
connect($sock, $ai->{addr}) or die "Connect failed: $!";
$sock->autoflush(1);

# Use Perl/IO (built on Standard I/O) to send a request for data:
$sock->print("GET /\n") or die "Socket error on write: $!";

# Read loop, draining data from the socket until EOF.
my $rc;
while (1) {
    $rc = $sock->read(my $buffer, 1024);

    # catch socket errors (excluding EOF):
    unless (defined $rc) {
        die "Socket error on read: $!";
    }

    # EOF returns 0, which exits the read loop:
    last if ($rc == 0);

    # Otherwise we have data (printing disabled to keep the capture readable):
    #printf "%s\n", $buffer;
}
print "\n";
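For reference, here is roughly how I exercise the script (the filename is whatever you saved it as; note that tcpdump reports the scale as a shift count, so "wscale 7" corresponds to a factor of 128 and "wscale 0" to a factor of 1):

$ sudo tcpdump -nn 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0' &
$ perl scale_demo.pl         # default SO_RCVBUF: SYN carries e.g. "wscale 7" (factor 128)
$ perl scale_demo.pl 2000    # tiny SO_RCVBUF: SYN carries "wscale 0" (factor 1)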

Re^6: changing tcp parameters when establishing connection
by BrowserUk (Patriarch) on Dec 03, 2015 at 21:04 UTC

    What you are describing is this:

    1. If your tcpip stack supports window scaling;
    2. And if it is enabled for your machine and your userid/group;
    3. And if you set the receive buffer size immediately before you connect;
    4. And if the remote host tcpip stack also supports window scaling;
    5. And if it is enabled;
    6. And if all the firewalls, gateway devices and bridges between the ends support window scaling; have it enabled and are set to monitor the scaling factors;
    7. And if the bandwidth-delay product between them indicates a benefit from increasing the buffer size;
    8. And if the application retrieve rate is sufficiently high to merit it;
    9. And if the stack-wide maximum scaling factor permits it;

    Then setting the receive buffer size may, heuristically, allow an application to influence the setting of the window size/scale factor product.

    That's an awful lot of ifs and buts to be claiming that "an application can set the window size".


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
    In the absence of evidence, opinion is indistinguishable from prejudice.

      In the name of avoiding a reply-war, I'll make this (brief) comment my last in the thread. For the record, I never said "an application can set the window size". What I did say is that an application can:

      tune window-scaling per-socket

      ..and..

      impact the window scaling

      ..and..

      adjust this in the application level (where "this" is socket buffer sizes)

      from the application. Apologies for any confusion if anyone thought I claimed direct application-level control over the window scaling. The application can influence it, which is described not only in the system manpages but in RFC 1323, section 2.1. This has been my only point for a while, and I suspect it's not worth any more attention.

        I never said "an application can set the window size"

        But the OP asked: "how can i change the tcp parameters (i.e. tcp window size, scaling"; and the answer is: he cannot.

        The application can influence this

        Great. You've outlined a mechanism for potentially effecting some indeterminate change in the window size/scale factor product. Now show how to do something useful with it.

        The primary benefit of adjusting that one tunable parameter is to improve throughput by tailoring the effective buffer size to the bandwidth and latency of the link.

        To make the adjustment, you need to know:

        • The effective bandwidth;
        • The effective latency;
        • The application retrieve rate.

        But none of these factors are available to application code. Not only is that information unavailable from the stack; even if it were, it would do you no good, because the only parameter you can tune is the receive buffer size, and that has to be set before the connection is made. So you cannot even instrument these from within the application and adjust in light of your findings.
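        To make the arithmetic concrete, here is a sketch with assumed example figures; the link speed and RTT below are hypothetical, and they are exactly the numbers application code has no way to discover:

        use strict; use warnings;

        # Hypothetical link: 100 Mbit/s with an 80 ms round-trip time.
        my $bandwidth_bps = 100_000_000;   # bits per second
        my $rtt_s         = 0.080;         # round-trip time, in seconds

        # Bandwidth-delay product: bytes that must be in flight to fill the pipe.
        my $bdp_bytes = ($bandwidth_bps / 8) * $rtt_s;   # 1_000_000 bytes

        # The largest unscaled TCP window is 65535 bytes, capping throughput at
        # window/RTT regardless of the link's actual capacity:
        my $ceiling = 65_535 / $rtt_s;                   # ~819 KB/s on a 12.5 MB/s link

        printf "need a ~%d byte window; unscaled ceiling ~%.0f bytes/s\n",
               $bdp_bytes, $ceiling;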

