Enhancing Network Performance with Packet Pacing


Akamai’s Smooth Delivery is a network performance enhancement initiative spearheaded by the Protocol Optimization team.  It consists of two parts – each focused on reducing congestion while enhancing network performance. 

  1. TCP Pacing – The topic of this post
  2. Rate Limiting – Upcoming post



TCP packets are often served in bursts in response to a client request.  This bursty packet behavior can increase peak network bandwidth demands and result in congestion and higher retransmission rates. 

High retransmission rates often lead to:

  • A decrease in goodput which may lead to undesired rebuffering for video traffic
  • A much lower quality of experience for end users
  • Less user engagement due to bad network performance
  • Utilization of server resources that could be used for other tasks

For example, three concurrent TCP flows on the same network might interact as follows:

Bandwidth Demand Without Pacing

Notice how the peak bandwidth is the sum of each flow's demand. Also notice that for the majority of the time, the network is idle.
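To make the arithmetic concrete, here is a toy Python model with illustrative numbers (not Akamai measurements): three flows each deliver 1 MB per 100 ms RTT, either in a 10 ms burst at the start of the RTT or paced evenly across the full RTT.

```python
# Toy model (illustrative numbers, not Akamai measurements):
# three flows each deliver 1 MB per 100 ms RTT, timeline in 1 ms slots.
RTT_MS = 100
FLOW_BYTES = 1_000_000
BURST_MS = 10  # without pacing, each flow dumps its data in the first 10 ms

timeline_burst = [0] * RTT_MS
timeline_paced = [0] * RTT_MS
for _ in range(3):  # three concurrent flows on the same network
    for ms in range(BURST_MS):
        timeline_burst[ms] += FLOW_BYTES // BURST_MS
    for ms in range(RTT_MS):
        timeline_paced[ms] += FLOW_BYTES // RTT_MS

# Same total bytes either way, but peak demand drops 10x with pacing,
# and the 90 ms of idle time disappears.
print("peak without pacing:", max(timeline_burst), "bytes/ms")
print("peak with pacing:   ", max(timeline_paced), "bytes/ms")
print("idle ms without pacing:", timeline_burst.count(0))
```

The totals delivered are identical; only the shape of the demand changes, which is exactly the point of pacing.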

Smooth Delivery Pacing (SDP) uses Linux fair queuing and pacing to manage the flow of TCP packets. Instead of bursting out the packets at the beginning of the round-trip time (RTT), the packets are paced out one by one over a fraction of the RTT. With SDP, the same throughput is achieved without the burstiness that often leads to congestion, and end users still receive all of the data they were expecting with no degradation in quality. Less congestion, in turn, means lower retransmission rates.
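On a Linux host, this behavior comes from two pieces: the fq qdisc (installed with, e.g., `tc qdisc replace dev eth0 root fq`), which releases each flow's packets on a schedule rather than back to back, and a per-socket pacing rate that the qdisc honors. The sketch below shows only the socket side; it assumes a Linux host, since `SO_MAX_PACING_RATE` (value 47) is Linux-specific, and it is a minimal illustration rather than Akamai's implementation.

```python
import socket

# Linux-only sketch: cap a single connection's pacing rate.
# SO_MAX_PACING_RATE tells the kernel the maximum bytes/second this
# socket may transmit; with the fq qdisc installed on the interface,
# packets are released on a per-flow schedule instead of in bursts.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)  # 47 on Linux

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1_250_000)  # ~10 Mbit/s
rate = sock.getsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE)
print("pacing cap:", rate, "bytes/sec")
sock.close()
```

Even without an explicit cap, modern kernels compute a pacing rate from the congestion window and RTT; fq simply enforces whichever rate applies.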

Low retransmission rates lead to:

  • An increase in goodput which may reduce rebuffering for video traffic
  • A much higher quality of experience for end users
  • More user engagement due to a high performing network
  • Lower utilization of server resources  

The image below illustrates the bandwidth demand when pacing is enabled.

Bandwidth Demand with Pacing



Smooth Delivery Pacing was enabled on the Akamai network in late 2017 and has had a good overall impact on retransmission rates and goodput.  On average, TCP retransmissions were reduced by 10% and goodput increased by up to 40%.


We also see great results when zooming in to the customer level. For example, one customer uses a Multi-CDN approach that shifts traffic among CDNs based on each network's performance, and our improved metrics have led them to allocate more traffic to Akamai. With pacing enabled, we observed a 6% reduction in rebuffering and a 36% increase in the customer's traffic served! Pacing benefited the customer, their end users, and Akamai's business.

Next Steps

Smooth Delivery Rate Limiting (SDRL) is the second part of the Smooth Delivery story and will further increase network efficiency. It will allow us to decide, on a per-connection basis, how much bandwidth to use and whether to increase or decrease the maximum bandwidth based on the needs of the application. In turn, SDRL will decrease congestion by reducing bursts and limiting the competition for bandwidth.
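SDRL's internals aren't described here, so the following is only a conceptual sketch of the idea: a per-connection limiter built from a standard token bucket, whose maximum rate can be raised or lowered at runtime the way the post describes. All names and numbers are illustrative.

```python
import time

class TokenBucket:
    """Conceptual per-connection rate limiter (not Akamai's SDRL code).

    Allows `rate` bytes/second on average, with bursts up to `burst`
    bytes. The cap can be adjusted at runtime, mirroring the idea of
    per-connection, application-driven bandwidth decisions.
    """

    def __init__(self, rate: float, burst: float):
        self.rate = rate        # refill rate, bytes/sec
        self.capacity = burst   # maximum tokens (burst allowance), bytes
        self.tokens = burst
        self.last = time.monotonic()

    def set_rate(self, rate: float) -> None:
        # Raise or lower the connection's cap based on application needs.
        self.rate = rate

    def try_send(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # caller should wait for tokens to refill and retry


bucket = TokenBucket(rate=1_000, burst=10_000)  # 1 kB/s, 10 kB bursts
print(bucket.try_send(8_000))   # fits within the burst allowance -> True
print(bucket.try_send(8_000))   # bucket nearly empty -> False
```

A limiter like this smooths demand in the same spirit as pacing: sends that would exceed the budget are deferred rather than dumped onto the network at once.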

It is estimated that SDRL, depending on how it is applied, can reduce congestion and peak bandwidth demands up to 4%.  We are currently in the process of formulating experiments to help us better understand the full potential impact of Rate Limiting.

Thanks for taking the time to learn a little about Smooth Delivery and stay tuned for Part 2 of the story. 

*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Darren Ng.
