
CDN Tuning for OTT – "Why Doesn’t it Already do That?"

When you initially onboarded your OTT traffic to a CDN, you probably went with default settings. And to be honest, why wouldn’t you? A standard media configuration is designed for short, HTTP-based segment delivery at scale. It removes the bottleneck of your origin connectivity, taking you from a couple of hundred Mb/s to several Gb/s (and Tb/s) of OTT traffic to your customers as your service grows. But do you understand what this standard configuration is doing, and how to fine-tune it for your current use case?

[Image: CDN for OTT, image one]

What Are You Getting, Out of The Box?

A media configuration is designed specifically for chunk-based delivery over HTTP or HTTPS.

[Image: CDN for OTT, image two]

So, All Good so Far. Why Revisit it?

If you haven’t revisited your media configuration for some time, you may find new self-service options. These are mainly about fine-tuning the configuration to your specific requirements. By understanding where your origin is and how your audience is distributed, the underlying settings can improve your performance, even if you’ve moved to smaller fragments or higher resolutions since launch. For example:

  • If you’ve reduced your fragment length to lower latency, the TCP connect timeouts, retry settings, and even peer request timeouts will work better when aligned with that fragment length.
  • If you’ve tweaked your bitrate ladder, adding higher bitrates and larger resolutions may mean your TCP settings need adjusting to deliver the right performance.
  • If you originally set your audience location to unknown or global but actually serve a regional audience, configuring this correctly will improve cacheability and performance.
  • Even adjusting the popularity characteristics can improve performance, by moving long-tail content to a lower cache-turnover footprint so it is less likely to be evicted from cache by other short-lived Internet traffic.

Now that you’ve fine-tuned your configuration for a more appropriate audience footprint, selected the correct origin location, understood your content popularity, and even adjusted to match your latest encoding ladder and fragment size, what more needs to be done? For most OTT customers, this out-of-the-box configuration with use-case-appropriate self-tuning will serve them well, even when serving hundreds of Gb/s of linear simulcast traffic. The CDN will handle scaling, improve throughput to origin compared with direct connections, mitigate DDoS attacks, and even provide additional features such as authentication or edge-based logic rules. The usual CDN functionality, such as origin health detection, origin failover and retry options, and dynamic construction of response objects such as manifests, can be added on top of the defaults through self-service.
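To make the fragment-length point concrete, here is a minimal sketch of how forward-request timeouts might be budgeted against fragment duration. The function and parameter names are hypothetical illustrations, not any specific CDN’s configuration API.

```python
# Hypothetical illustration: deriving forward-request timeouts from fragment
# length, so that all retries complete within roughly one fragment duration.
# Parameter names are illustrative, not a real CDN configuration schema.

def tune_timeouts(fragment_seconds: float, max_retries: int = 2) -> dict:
    """Split one fragment's duration across the initial attempt and retries,
    keeping live latency low by failing fast rather than waiting out a stall."""
    budget = fragment_seconds                  # total time before the player stalls
    per_attempt = budget / (max_retries + 1)   # time available to each attempt
    return {
        "tcp_connect_timeout_s": round(per_attempt * 0.25, 2),  # fail fast on connect
        "read_timeout_s": round(per_attempt * 0.75, 2),
        "max_retries": max_retries,
    }

# A 2-second fragment leaves each of three attempts roughly 0.67 s.
settings = tune_timeouts(fragment_seconds=2.0)
```

The point of the sketch is simply that a 6-second-fragment timeout budget is far too forgiving for a 2-second fragment; the real tuning is done through the CDN’s own configuration options.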

Now we can start to look at the KPIs you’ve established as your service has matured and tune your delivery configuration around them. You don’t need to be serving Tb/s of traffic to know that a percentage of requests will still go back to your origin. Is your origin particularly sensitive to load spikes? Do you want to increase your origin offload even further? Do you have a KPI around client buffering instances?

One of the questions we usually get when we propose fine-tuning a configuration for a specific solution is “why doesn’t it already do that?” The answer can often be complex, but it boils down to balancing customer-specific business objectives that are not always mutually compatible. Out of the box, a media configuration offers a balance of reliability, performance, and offload. First and foremost, it is designed to scale. When you start to tweak in favour of one of those objectives, it is often at the expense of another, and it is ultimately understanding the business priorities of a service that allows us to provide the right solution.

One of the tuning tweaks we can introduce, for example, is mid-tier retry. The configuration constantly measures the response throughput for a forward object request to a parent. If it falls below certain throughput thresholds, it opens a second, concurrent connection, and the object is delivered from whichever connection returns it first. It can even construct a large object response from partial byte-range requests via separate routes. This is good for mitigating mid-tier slowness, especially where network providers peer over the public Internet, and improves KPIs based around buffering, but at the expense of additional requests going to the origin.
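The mid-tier retry behaviour described above can be sketched as a simple race between connections. This is an assumed, simplified model, with `fetch()` as a stand-in for a forward request, not any real CDN API:

```python
# Illustrative sketch of "mid-tier retry": if the primary forward connection
# is slow, open a concurrent connection via an alternate route and deliver
# the object from whichever returns first. fetch() is a hypothetical stand-in.
import concurrent.futures
import time

def fetch(route: str, delay: float) -> bytes:
    time.sleep(delay)                 # simulate network latency on this route
    return f"object-via-{route}".encode()

def fetch_with_race(slow_threshold: float = 0.5) -> bytes:
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        primary = pool.submit(fetch, "primary-parent", 1.0)
        done, _ = concurrent.futures.wait({primary}, timeout=slow_threshold)
        if done:                      # primary answered within the threshold
            return primary.result()
        # Primary is slow: race a second connection over an alternate route.
        backup = pool.submit(fetch, "alternate-route", 0.2)
        done, _ = concurrent.futures.wait(
            {primary, backup}, return_when=concurrent.futures.FIRST_COMPLETED
        )
        return next(iter(done)).result()

data = fetch_with_race()
```

The trade-off is visible even in the toy version: the racing connection improves delivery time but doubles the forward requests for slow objects.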
If offload is your key objective (for example, you can’t dynamically scale your origin farm, or you are sensitive to egress costs or demand spikes), we can look to see if the traffic distribution would benefit from the introduction of an additional cache tier or additional peer requests. Additional parent or grandparent tiers can provide more offload, especially where the audience is more widely distributed. However, additional tiers introduce additional hops — something you want to avoid if zero-buffer performance or ultra-low latency is your main KPI. The CDN can also smooth spikes at the edge by queuing requests whilst a forward object request is “in flight.”
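The “queuing requests whilst a forward request is in flight” idea is often called request collapsing. A minimal sketch of the mechanism, with hypothetical class and method names, might look like this:

```python
# Illustrative sketch of request collapsing at the edge: while a forward
# request for an object is "in flight", later requests for the same cache key
# queue behind it instead of each hitting the origin. Names are hypothetical.
import threading

class CollapsingCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._lock = threading.Lock()
        self._in_flight: dict[str, threading.Event] = {}
        self._store: dict[str, bytes] = {}

    def get(self, key: str) -> bytes:
        with self._lock:
            if key in self._store:
                return self._store[key]          # cache hit
            event = self._in_flight.get(key)
            if event is None:                    # we are the first requester
                event = threading.Event()
                self._in_flight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            value = self._fetch(key)             # the single forward request
            with self._lock:
                self._store[key] = value
                del self._in_flight[key]
            event.set()                          # release the queued requests
            return value
        event.wait()                             # queue behind the in-flight fetch
        with self._lock:
            return self._store[key]
```

However many clients request the same live segment at once, the origin sees one forward request — which is exactly how the edge smooths demand spikes.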

[Image: CDN for OTT, image three]

Popular items can push other objects out of edge cache storage during a demand lull. Sometimes cache eviction is unavoidable, even when an object is within its TTL, because edge capacity is finite. CDNs continue to add edge capacity, but it is shared among all streaming traffic. Large customers with high traffic needs can even request custom maps, specifically targeting their audience footprint and origin location, or scale up to a managed or licensed CDN deployment that offers dedicated machines for their traffic. Even without a dedicated managed CDN (MCDN), understanding your content’s popularity (long tail, etc.) can assist in selecting a delivery map where you have a better chance of a cache hit and your objects aren’t competing with short-lived content for cache footprint.
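Why a still-valid object gets evicted is easy to see with a toy LRU model — the standard eviction behaviour most caches approximate. This is a deliberately tiny sketch, not how any particular CDN implements its cache:

```python
# Toy model of cache eviction pressure: a small LRU cache shared with bursty
# short-lived traffic evicts a long-tail object even though its TTL is valid.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._d = OrderedDict()

    def put(self, key, value):
        if key in self._d:
            self._d.move_to_end(key)         # refresh recency on re-store
        self._d[key] = value
        if len(self._d) > self.capacity:
            self._d.popitem(last=False)      # evict the least recently used

    def __contains__(self, key):
        return key in self._d

edge = LRUCache(capacity=3)
edge.put("my-longtail-episode.ts", b"...")
for i in range(3):                           # a burst of unrelated traffic
    edge.put(f"other-object-{i}", b"...")
# "my-longtail-episode.ts" has now been evicted, TTL notwithstanding.
```

Moving long-tail content to a footprint with lower turnover is, in effect, giving it a cache where bursts like this are less likely to push it out.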

Other adjustments — checking with non-obvious edge peers for objects before going forward to a cache parent, tweaking retry counts before taking any fail action, queuing requests for the same object or response code on the edge, manipulating cache keys, consolidating or spreading cache footprint, failing over to alternate maps, etc. — all involve some degree of compromise. Each favours performance, reliability, or offload, usually at the expense of the others. However, if your business objectives need weighting in one specific area, we can help nudge the configuration in the right direction by engaging Professional Services Consultants and ensuring we understand your KPIs.
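Cache-key manipulation, one of the adjustments listed above, is easiest to show by example. A common case is stripping volatile query parameters so identical segments collapse to one cached object. The parameter names below are illustrative assumptions, not a real service’s token scheme:

```python
# Hypothetical example of cache-key manipulation: drop volatile, per-session
# query parameters from the cache key so identical segments map to a single
# cached object instead of fragmenting the cache footprint.
from urllib.parse import parse_qsl, urlencode, urlsplit

VOLATILE_PARAMS = {"token", "session", "uid"}   # illustrative names

def cache_key(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in VOLATILE_PARAMS]
    # Sort the surviving parameters so ordering differences don't split the key.
    return f"{parts.netloc}{parts.path}?{urlencode(sorted(kept))}"

# Both requests map to the same cached object despite different tokens:
a = cache_key("https://cdn.example.com/live/seg42.ts?quality=hd&token=abc123")
b = cache_key("https://cdn.example.com/live/seg42.ts?token=xyz789&quality=hd")
```

The compromise is the usual one: consolidating the key improves offload, but strip a parameter that actually changes the response and you will serve the wrong object, so it only works when you know your URL scheme.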

Similar under-the-hood adjustments can be made if you are pushing the boundaries with either ultra-low latency or extreme VR/UHD bitrates and need specialist tuning such as object pre-fetching to maximise performance or to give you that buffer zone on reliability. When your OTT traffic levels start to hit Tb/s, even the smallest tweak — such as adding or reducing a second in TCP retry or timeout settings — can have a noticeable impact on overall performance and crucial operational KPIs that you report back to the business.


*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Akamai. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/S6K28wwwV2w/cdn-tuning-for-ott---why-doesnt-it-already-do-that.html