Aaron E
Guest
I have an environment with a few hundred Linux (CentOS/RedHat) servers that all interact with each other through TCP connections. Above the TCP layer is an RPC protocol with timeouts built into it, such that the application will wait about 10 seconds for a response after sending an RPC request before it considers the request to have timed out and takes alternate/corrective action to handle it. Very intermittently, I have been observing the RPC layer logging a timeout, meaning the RPC layer on one end of the connection has sent a request to the other end but an RPC response has not yet been received.
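To make the failure mode concrete, the behaviour at the RPC layer is roughly the following (purely illustrative Python; the real application and wire protocol are different, and the names here are made up):

```python
import socket

RPC_TIMEOUT = 10.0  # roughly how long the RPC layer waits for a response

def rpc_call(sock, request):
    # Illustrative only: send one request, then wait up to RPC_TIMEOUT for
    # the peer's response before declaring the request timed out.
    sock.sendall(request)
    sock.settimeout(RPC_TIMEOUT)
    try:
        return sock.recv(65536)
    except socket.timeout:
        # This is the intermittent event showing up in the logs; the caller
        # then takes its alternate/corrective action.
        return None
```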
I am still trying to isolate whether the issue is at the TCP or RPC layer.
To investigate the TCP layer, I've been gathering samples of "netstat -pan" output on every server in the environment, every few seconds, over the course of a few days.
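Concretely, the sampler on each host is a small loop along these lines (a rough Python sketch; the interval and log path are arbitrary placeholders):

```python
#!/usr/bin/env python
# Rough sketch of the per-host sampler: run "netstat -pan" every few seconds
# and record TCP connections whose send queue is non-zero.
import subprocess
import time

INTERVAL = 5                                 # seconds between samples (arbitrary)
LOGFILE = "/var/tmp/tcp-sendq-samples.log"   # placeholder path

def nonzero_sendq():
    out = subprocess.check_output(["netstat", "-pan"]).decode("utf-8", "replace")
    hits = []
    for line in out.splitlines():
        # TCP columns: proto recv-q send-q local foreign state pid/program
        fields = line.split()
        if len(fields) >= 6 and fields[0] in ("tcp", "tcp6") and fields[2].isdigit():
            if int(fields[2]) > 0:
                hits.append("%s send_q=%s %s -> %s %s" % (
                    time.strftime("%Y-%m-%dT%H:%M:%S"),
                    fields[2], fields[3], fields[4], fields[5]))
    return hits

if __name__ == "__main__":
    with open(LOGFILE, "a") as log:
        while True:
            for entry in nonzero_sendq():
                log.write(entry + "\n")
            log.flush()
            time.sleep(INTERVAL)
```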
What I've noticed is that when I have a timeout for a request sent from server A to server B, the send queue size is non-zero (as shown by netstat -pan) for the TCP connection on server B. I am presuming that the RPC response has been partially or fully placed into the TCP send queue at that point but for some reason is not being transmitted.
I want to confirm my suspicion and dive further into the root cause. For this type of issue, my go-to is using tcpdump to capture the traffic from both ends of the connection and then analyze it with Wireshark to understand the behavior. Unfortunately, given the number of servers involved, the intermittent nature of the issue, and the very high rate of traffic flow (well-utilized 10GbE), it is not feasible to run tcpdump for an extended period of time across all machines in the environment.
As the send queue size stays at the same non-zero value for >10 seconds, what I'd like to do is a simple poll of the TCP state of all open connections on a machine every few seconds. Specifically, what I'd like to retrieve for each TCP connection is:
1. The last window size and scale factor advertised by the server
2. The highest acknowledgement number sent by the server for the connection along with any unacknowledged extents (as selective ack is enabled)
3. The next sequence number the server will use to transmit on the connection
4. Whether the TCP socket is corked (as the application uses TCP_CORK)
From this information, I should be able to identify whether data has been transmitted but not acknowledged, whether the window is exhausted and hence no more data can be sent, or whether one of the servers is hitting an application-layer bug where the cork is left in place and the RPC response is being held back by a logic error in the application.
Unfortunately, I've looked around and could not find a *convenient* way to retrieve this information. I've checked all the documentation I could find for the netstat and ss commands, and I've dug through a lot of /proc content, but no luck.
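To be specific about how far ss gets me: with -i it does print some per-connection internals (wscale, unacked and retransmit counters, depending on the iproute2 and kernel versions), but not the raw sequence/acknowledgement numbers or the cork state. A best-effort parsing sketch of that output:

```python
import re
import subprocess

def ss_tcp_info():
    """Best-effort scrape of per-connection counters from `ss -t -i -n`."""
    out = subprocess.check_output(["ss", "-t", "-i", "-n"]).decode("utf-8", "replace")
    conns = []
    current = None
    for line in out.splitlines()[1:]:          # skip the header line
        if not line.startswith((" ", "\t")):
            # Connection line: State Recv-Q Send-Q Local:Port Peer:Port
            f = line.split()
            if len(f) >= 5:
                current = {"local": f[3], "peer": f[4],
                           "send_q": int(f[2]) if f[2].isdigit() else 0}
                conns.append(current)
        elif current is not None:
            # Indented detail line, e.g. "cubic wscale:7,7 rto:204 ... unacked:3 retrans:0/2"
            m = re.search(r"wscale:(\d+),(\d+)", line)
            if m:
                current["snd_wscale"], current["rcv_wscale"] = map(int, m.groups())
            m = re.search(r"unacked:(\d+)", line)
            if m:
                current["unacked"] = int(m.group(1))
            m = re.search(r"retrans:\d+/(\d+)", line)
            if m:
                current["retrans"] = int(m.group(1))
    return conns

if __name__ == "__main__":
    # Print only connections that currently have data sitting in the send queue.
    for conn in ss_tcp_info():
        if conn["send_q"] > 0:
            print(conn)
```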
I presume something like systemtap could be used, or a custom kernel could be built, but neither of those is a particularly attractive option unless there is no other way to retrieve the TCP state of each open connection from a user-space process.
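The closest I can get today is from inside the owning process: getsockopt(TCP_INFO) exposes the negotiated window scale factors and the count of unacknowledged segments, and getsockopt(TCP_CORK) answers item 4. But that means modifying the application, it only sees the application's own sockets, and it still doesn't expose the raw sequence/acknowledgement numbers (items 2 and 3). A rough sketch, assuming the stable leading layout of struct tcp_info on Linux/x86:

```python
import socket
import struct

# Fallbacks in case this Python build doesn't export the constants
# (values from <netinet/tcp.h> on Linux).
TCP_CORK = getattr(socket, "TCP_CORK", 3)
TCP_INFO = getattr(socket, "TCP_INFO", 11)

# Stable leading portion of struct tcp_info (linux/tcp.h):
#   7 x u8: state, ca_state, retransmits, probes, backoff, options,
#           snd_wscale:4 / rcv_wscale:4 packed into one byte
#   1 byte padding, then u32: rto, ato, snd_mss, rcv_mss, unacked,
#   sacked, lost, retrans, fackets, ...
_TCP_INFO_FMT = "7B x 9I"

def tcp_state(sock):
    """Pull a few of the wanted fields for a connected TCP socket we own."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO,
                          struct.calcsize(_TCP_INFO_FMT))
    (state, ca_state, retransmits, probes, backoff, options, wscales,
     rto, ato, snd_mss, rcv_mss, unacked, sacked, lost, retrans,
     fackets) = struct.unpack(_TCP_INFO_FMT, raw)
    return {
        # bitfield nibble order assumes a little-endian (x86) build
        "snd_wscale": wscales & 0x0F,   # scale factor the peer advertised to us
        "rcv_wscale": wscales >> 4,     # scale factor this end advertised
        "unacked_segs": unacked,        # segments sent but not yet acknowledged
        "retrans_segs": retrans,        # segments currently marked for retransmit
        "corked": bool(sock.getsockopt(socket.IPPROTO_TCP, TCP_CORK)),
    }
```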
Any ideas?