The Transmission Control Protocol (TCP) underpins major internet services such as video streaming, file transfers, web browsing, and communications, accounting for the majority of fixed-access internet traffic and an even larger share of mobile traffic. Yet TCP performance has still to reach its full potential. Sub-optimal TCP performance has undesirable consequences for communications service providers (CSPs), who struggle to make the most of expensive resources, combat operational inefficiencies, and deliver a high quality of experience to subscribers.
A CSP can address this issue by deploying a TCP optimization solution that delivers measurably better TCP performance: increased goodput, improved network efficiency, higher transfer speeds, lower retransmission rates, and more consistent round-trip times. So how can you optimize TCP performance? TCP optimization involves deploying various techniques to monitor and regulate TCP connections. Solutions come in various forms, with “one-box” and “two-box” designs being the most widely used. A one-box solution is deployed in the path between the two TCP endpoints and takes charge of the communication between client and server. Two-box solutions, on the other hand, are deployed at the two endpoints themselves and suit settings where the service provider controls both ends.
Techniques to Improve TCP Performance
- Decreasing the time to reach available bandwidth: A TCP server does not know the properties of the network over which it is delivering content, so TCP is designed to probe the network to discover the available bandwidth. This phase is called TCP Slow Start: the sender transmits gradually increasing amounts of data until, at a certain point, it adapts its behavior to the characteristics of the network. Slow Start begins with a congestion window of one, two, or ten segments. The server maintains the congestion window, which determines how many segments may be outstanding at any given time. The congestion window grows by one segment for each acknowledgement (ACK) received, which roughly doubles it every round-trip time, and this growth continues until a packet loss is detected. A TCP optimization solution reduces the time to reach available bandwidth by splitting the latency between the subscriber network and the Internet and applying optimization techniques on both sides of the connection.
- Pre-acknowledging data: A TCP acceleration solution can optimize TCP performance by pre-acknowledging data on behalf of the client. The sooner the server sees the ACKs, the faster it transmits, and the entire Slow Start phase is accelerated. Some TCP acceleration solutions deploy further techniques to accelerate Slow Start and deliver better outcomes.
- Preserving available bandwidth: Maintaining available bandwidth is crucial to service quality. TCP’s congestion control algorithms often interpret non-congestion events as congestion, causing the sender to decrease its congestion window and slow down unnecessarily. Such spurious slowdowns are typically triggered by packet reordering, pauses, or packet losses that are not caused by congestion. A TCP optimization solution helps maintain transmission speeds by ensuring the sending rate is not reduced unnecessarily, and when the server does reduce its rate, the solution should promote quick recovery.
- Adjusting to changes in available bandwidth: Changing demand, aggregation layers, and shared resources cause network capacity to fluctuate, creating opportunities to improve TCP performance. For example, the server has to back off immediately when bandwidth decreases and quickly claim newly available bandwidth when it increases. An effective TCP optimization solution helps CSPs achieve both efficiently.
- Managing packet loss in over-dimensioned networks: In modern, over-dimensioned networks congestion is rare, so it cannot be solely blamed for packet loss. Packets may also be lost to hardware problems, queue overflows, damaged cables, TCP checksum errors, and faulty memory. TCP’s standard loss recovery is quite slow, which is where a TCP optimization solution proves useful: it can speed up recovery while also reducing the time to reach available bandwidth.
- Addressing congestion in over-dimensioned networks: Congestion results when the bandwidth into a resource exceeds the bandwidth out of it. Modern networks limit packet loss with large queues that momentarily store excess data until it can be forwarded; only when the buffers overflow are packets dropped. Although queueing significantly reduces packet drops, it delays delivery and inflates round-trip times. An effective TCP acceleration solution prevents excessive queueing and packet drops at the same time.
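To get a rough feel for the Slow Start timing discussed above, the sketch below counts how many round trips the congestion window (doubling per RTT) needs to cover a path's bandwidth-delay product. The link speed, round-trip times, and initial window are illustrative assumptions, not figures from this post:

```python
def rtts_to_fill_pipe(initial_cwnd_segments, target_segments):
    """Count round trips until the congestion window reaches the target."""
    cwnd, rtts = initial_cwnd_segments, 0
    while cwnd < target_segments:
        cwnd *= 2  # Slow Start: cwnd roughly doubles every round trip
        rtts += 1
    return rtts

SEGMENT_BYTES = 1460
# Assumed example path: 50 Mbit/s bottleneck, 100 ms round-trip time.
bdp_segments = int(50e6 / 8 * 0.100 / SEGMENT_BYTES)

# End-to-end connection: each doubling costs a full 100 ms round trip.
end_to_end_ms = rtts_to_fill_pipe(10, bdp_segments) * 100

# Split connection: if an optimizer sits near the server and
# pre-acknowledges data, the server ramps up over an assumed 20 ms
# server-side RTT instead of the full path RTT.
split_ms = rtts_to_fill_pipe(10, bdp_segments) * 20

print(end_to_end_ms, split_ms)  # → 600 120
```

The same number of doublings is needed either way; splitting the latency simply makes each doubling cheaper, which is why pre-acknowledgement shortens the ramp-up.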
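The cost of the spurious slowdowns described above can be approximated with a toy additive-increase model: the sender halves its window on a loss that was not congestion, then recovers one segment per round trip. All window sizes and durations here are assumptions for illustration:

```python
def segments_sent(cwnd, rtts, halve_at=None):
    """Total segments sent over `rtts` round trips, with an optional
    spurious window halving at round trip `halve_at`."""
    sent = 0
    for t in range(rtts):
        if t == halve_at:
            cwnd //= 2  # spurious congestion response (e.g. reordering)
        sent += cwnd
        cwnd += 1  # additive increase, one segment per round trip
    return sent

steady = segments_sent(100, 50)               # no false alarm
spurious = segments_sent(100, 50, halve_at=10)  # one false alarm
print(steady, spurious)  # → 6225 4025
```

A single unnecessary halving costs roughly a third of the throughput over this window, which is why suppressing false congestion signals (or recovering quickly from them) matters.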
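The round-trip inflation caused by standing queues in the last bullet is simple arithmetic: data sitting in a buffer must drain through the bottleneck before anything behind it. The buffer size and link rate below are assumed values:

```python
def queueing_delay_ms(queued_bytes, link_bps):
    """Extra delay added by a standing queue draining at the link rate."""
    return queued_bytes * 8 / link_bps * 1000

# A 1 MB standing queue on a 50 Mbit/s link adds 160 ms to the RTT.
print(round(queueing_delay_ms(1_000_000, 50e6)))  # → 160
```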
A CSP should take factors such as solution scalability and its overall traffic optimization strategy into account when choosing a TCP optimization solution. The CSP should have a clear understanding of how TCP performance can be enhanced and how success can be measured.
Read “Keep It Simple; Make It Scalable: 6 Characteristics of the Futureproof Load Balancer” to learn more.
Fabio is Technical Director EMEA-CALA, responsible for Systems Engineering in the theater. He began his career in software development for aerospace systems before moving into the IT vendor ecosystem with Bay Networks/Nortel and Juniper Networks, ultimately becoming Technical Director EMEA for the Telecom, Cloud and Content businesses.
Fabio writes about technology strategy, trends and implementation.
This is a Security Bloggers Network syndicated blog post authored by Fabio Palozza. Read the original post at: Radware Blog