Using profile tcprtt

    The profile tcprtt gadget generates a histogram of the distribution of TCP connections' Round-Trip Time (RTT). The RTT values used to build the histogram come from the smoothed RTT (SRTT) estimate that the Linux kernel already maintains for each TCP socket.
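
    The kernel's smoothed RTT is an exponentially weighted moving average over individual RTT samples, as described in RFC 6298. The following Python sketch shows that update rule; it is a simplification that ignores the variance term and the kernel's fixed-point arithmetic:

```python
# Sketch of how the kernel's smoothed RTT (SRTT) estimate evolves.
# This mirrors the exponentially weighted moving average from RFC 6298
# (alpha = 1/8). Linux keeps the real value per socket; the gadget
# reads that value rather than timing packets itself.

ALPHA = 1.0 / 8.0  # RFC 6298 smoothing factor

def update_srtt(srtt_us, sample_us):
    """Fold a new RTT measurement into the smoothed estimate."""
    if srtt_us is None:          # the first measurement initializes SRTT
        return float(sample_us)
    return (1 - ALPHA) * srtt_us + ALPHA * sample_us

srtt = None
for sample in [100, 100, 400, 100]:   # RTT samples in microseconds
    srtt = update_srtt(srtt, sample)

# A single 400µs spike only nudges the smoothed estimate upward:
print(round(srtt, 1))  # -> 132.8
```

    Because of this smoothing, a lone outlier barely moves the per-socket value that ends up in the histogram.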

    The histogram considers only TCP connections that have already been established, so it does not take the connection phase (the TCP 3-way handshake) into account. If that is what you are looking for, please check the latency information provided by the trace tcpconnect gadget.

    By default, the profile tcprtt gadget generates a single histogram per host, or one per node on Kubernetes. However, it also provides several ways to analyze specific connections. For instance, we can generate separate histograms per local IP address to explore the connections handled by each local address, and the same can be done for remote IP addresses. It is also possible to filter by a specific local and/or remote address to isolate the analysis.

    On Kubernetes

    First of all, let’s start the gadget on a terminal:

    kubectl gadget profile tcprtt
    

    In another terminal, create a server using nginx:

    kubectl create service nodeport nginx --tcp=80:80
    kubectl create deployment nginx --image=nginx
    

    And then, create a pod to generate some traffic with the server:

    $ kubectl run -ti --privileged --image wbitt/network-multitool myclientpod -- bash
    # curl nginx
    # curl nginx
    

    If we move back to the first terminal and stop the gadget, it will print the histogram:

    All Addresses = ****** [AVG 1211.066824]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 114      |***********                             |
            16 -> 31         : 397      |****************************************|
            32 -> 63         : 182      |******************                      |
            64 -> 127        : 49       |****                                    |
           128 -> 255        : 48       |****                                    |
           256 -> 511        : 107      |**********                              |
           512 -> 1023       : 108      |**********                              |
          1024 -> 2047       : 86       |********                                |
          2048 -> 4095       : 31       |***                                     |
          4096 -> 8191       : 111      |***********                             |
          8192 -> 16383      : 28       |**                                      |
         16384 -> 32767      : 11       |*                                       |
    

    Keep in mind that the generated histogram covers all TCP connections on each node, not only the ones we established. Note that we used a single-node cluster for this guide.
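
    Each row of the histogram is a power-of-two bucket: a value v (in µs) falls in the row [2^k, 2^(k+1)-1] with k = floor(log2(v)). A small Python sketch of that mapping, as an illustration of the bucket layout only, not the gadget's actual code:

```python
# Map an RTT value (µs) to the power-of-two bucket shown in the
# histogram rows, e.g. 25µs lands in the "16 -> 31" row.
# Illustration of the bucket layout only, not the gadget's code.

def bucket(value_us):
    if value_us <= 1:
        return (0, 1)              # the first row covers 0 -> 1
    k = value_us.bit_length() - 1  # floor(log2(value_us))
    return (2 ** k, 2 ** (k + 1) - 1)

print(bucket(25))    # -> (16, 31)
print(bucket(1211))  # -> (1024, 2047), where the ~1211µs average falls
```

    The exponential bucket widths are why the rows cover a range from microseconds up to seconds with only a couple of dozen lines.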

    So, let’s repeat the test but this time filtering by remote address so that we can analyse the traffic we are generating toward our nginx service:

    $ kubectl get service nginx -o jsonpath={.spec.clusterIP}
    10.0.38.234
    $ kubectl gadget profile tcprtt --raddr 10.0.38.234
    All Addresses = ****** [AVG 2087.000000]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 0        |                                        |
           128 -> 255        : 0        |                                        |
           256 -> 511        : 0        |                                        |
           512 -> 1023       : 0        |                                        |
          1024 -> 2047       : 3        |****************************************|
          2048 -> 4095       : 3        |****************************************|
    
    

    Now, from the client pod's shell, let's use the network emulator (netem) to add random delay to outgoing packets, indirectly increasing the RTT. The parameters below configure a mean delay of 50ms with up to 50ms of jitter, 25% correlated between consecutive packets:

    # tc qdisc add dev eth0 root netem delay 50ms 50ms 25%
    # curl nginx
    # curl nginx
    

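    To build intuition for what delay 50ms 50ms 25% means (a 50ms mean delay, up to 50ms of random jitter, and 25% correlation between consecutive packets), here is a toy Python model. It is a simplification for intuition only; real netem uses its own correlated pseudo-random process and configurable distributions:

```python
import random

# Toy model of netem's "delay 50ms 50ms 25%": each packet gets a delay
# drawn around a 50ms mean with up to 50ms of jitter, partially
# correlated with the previous packet's delay.
random.seed(42)  # deterministic for reproducibility

MEAN_MS, JITTER_MS, CORR = 50.0, 50.0, 0.25

def packet_delays(n):
    prev = MEAN_MS
    for _ in range(n):
        fresh = MEAN_MS + random.uniform(-JITTER_MS, JITTER_MS)
        prev = CORR * prev + (1 - CORR) * fresh  # blend with previous delay
        yield prev

samples = list(packet_delays(10_000))
mean = sum(samples) / len(samples)
print(f"mean delay ~= {mean:.1f} ms")  # stays close to the configured 50ms
```

    Every outgoing packet gets roughly this extra delay on top of the real network latency, which is why the histogram below shifts by tens of milliseconds.
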
    Now the average RTT value of the new histogram is clearly higher:

    $ kubectl gadget profile tcprtt --raddr 10.0.38.234
    All Addresses = ****** [AVG 68973.833333]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 0        |                                        |
           128 -> 255        : 0        |                                        |
           256 -> 511        : 0        |                                        |
           512 -> 1023       : 0        |                                        |
          1024 -> 2047       : 0        |                                        |
          2048 -> 4095       : 0        |                                        |
          4096 -> 8191       : 0        |                                        |
          8192 -> 16383      : 0        |                                        |
         16384 -> 32767      : 0        |                                        |
         32768 -> 65535      : 3        |****************************************|
         65536 -> 131071     : 3        |****************************************|
    
    

    With ig

    Start the profile tcprtt gadget on a first terminal:

    sudo ig profile tcprtt
    

    Then, start a container and download a web page:

    $ docker run -ti --rm --cap-add NET_ADMIN --name=netem wbitt/network-multitool -- /bin/bash
    # wget 1.1.1.1
    

    Move back to the first terminal and stop the gadget; it will print the histogram:

    All Addresses = ****** [AVG 2343.510333]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 25       |                                        |
             8 -> 15         : 226      |**                                      |
            16 -> 31         : 777      |********                                |
            32 -> 63         : 1532     |****************                        |
            64 -> 127        : 2822     |*******************************         |
           128 -> 255        : 2254     |************************                |
           256 -> 511        : 3305     |************************************    |
           512 -> 1023       : 2863     |*******************************         |
          1024 -> 2047       : 1284     |**************                          |
          2048 -> 4095       : 1456     |****************                        |
          4096 -> 8191       : 3612     |****************************************|
          8192 -> 16383      : 167      |*                                       |
         16384 -> 32767      : 0        |                                        |
         32768 -> 65535      : 14       |                                        |
         65536 -> 131071     : 75       |                                        |
        131072 -> 262143     : 0        |                                        |
        262144 -> 524287     : 0        |                                        |
        524288 -> 1048575    : 8        |                                        |
    

    This histogram represents the RTT distribution for all the TCP connections established on the host, not only those established from the test container.

    Let’s repeat the test but this time filtering by remote address so that we can analyse the traffic we are generating:

    $ sudo ig profile tcprtt --raddr 1.1.1.1
    All Addresses = ****** [AVG 7359.357143]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 0        |                                        |
           128 -> 255        : 0        |                                        |
           256 -> 511        : 0        |                                        |
           512 -> 1023       : 0        |                                        |
          1024 -> 2047       : 0        |                                        |
          2048 -> 4095       : 0        |                                        |
          4096 -> 8191       : 9        |****************************************|
          8192 -> 16383      : 5        |**********************                  |
    

    Now, inside the container, let's use the network emulator (netem) to add some random delay to the packets, indirectly increasing the RTT:

    # tc qdisc add dev eth0 root netem delay 50ms 50ms 25%
    # wget 1.1.1.1
    

    Then, regenerate the histogram to see the change in the RTT:

    $ sudo ig profile tcprtt --raddr 1.1.1.1
    All Addresses = ****** [AVG 72278.307692]
            µs               : count    distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 0        |                                        |
           128 -> 255        : 0        |                                        |
           256 -> 511        : 0        |                                        |
           512 -> 1023       : 0        |                                        |
          1024 -> 2047       : 0        |                                        |
          2048 -> 4095       : 0        |                                        |
          4096 -> 8191       : 0        |                                        |
          8192 -> 16383      : 0        |                                        |
         16384 -> 32767      : 0        |                                        |
         32768 -> 65535      : 0        |                                        |
         65536 -> 131071     : 13       |****************************************|
    

    We can see how the average RTT jumped from 7359.357143µs to 72278.307692µs.
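
    As a quick sanity check, the shift between the two reported averages lines up with the delay we injected. This is plain arithmetic on the two values above; the gadget itself computes these averages from the raw samples:

```python
# Compare the average RTT reported before and after adding the netem delay.
before_us = 7359.357143   # AVG without netem
after_us = 72278.307692   # AVG with "delay 50ms 50ms 25%"

print(round(after_us / before_us, 1))           # -> 9.8 (about a 10x increase)
print(round((after_us - before_us) / 1000, 1))  # -> 64.9 (ms added on average)
```

    The roughly 65ms average increase is in line with the configured 50ms mean delay plus jitter.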