Studies show that video consumption demand is growing faster than capacity. While some capacity headroom does exist, current trends suggest that we’ll see more bottlenecking at last-hop connectivity, especially with regionally targeted content.
By pushing “vanity bitrates” — those topmost speeds that our engineers can deliver — across all regions and devices, we’re hitting that congestion spot earlier. Sometimes this forces clients to adapt their bitrate requests and step down. Other times, they step up unnecessarily, without a perceivable benefit in quality.
Over shared infrastructure, this has a knock-on effect for everyone. When demand for that service exceeds expectation, the capacity to provide all users their full bitrate at the same time just can’t be built in. Laissez-faire encoding decisions can have a real impact on a telco’s architecture.
Live video isn’t like video on demand (VOD) or a game download, where congestion hiccups may have minimal impact. Congestion at any point in the over-the-top (OTT) distribution chain can ruin the experience for your customer.
Moving Content to the Edge With CDN Gives Customers a Cache Hit at the First or Second Hop
Regional Internet Service Provider (ISP) network infrastructure wasn’t built for situations where a large number of fast-last-mile customers consume high-bitrate live video at the same time. User connectivity is over-provisioned based on average throughput, with some overhead for spikes.
This worked when broadband first rolled out. If you were the only person on your street watching cat videos, you were fine. As soon as your neighbour started watching dog videos, the local exchange suffered. Everyone connected to that cabinet or exchange experienced a slowdown. People got faster and faster pipes to compensate.
Meanwhile, content caches, colocated in the ISP’s infrastructure, got closer to the end user. This helped eliminate congestion at the origin and reduce peering costs, but the bottlenecks remained between the edge cache and the user’s last-mile connectivity.
Today, bitrates are increasing, and content distribution network (CDN) capacity is continually being added. We’re also pushing CDN caches further into that telco architecture. This helps mitigate internal unicast traffic levels at high concurrency.
But video demand is still expected to grow faster than the infrastructure to deliver it.
Vanity Bitrates Can Ultimately Lead to a Poor Viewing Experience
Vanity bitrates have become the norm because the metric was the customer’s connectivity speed. It’s easy to make the case that your target customer has an X Mbps pipe, and that’s still larger than your highest bitrate.
But the customer’s last-mile connectivity is no longer the limit you need to keep an eye on. You may be targeting customers with HDR 120 Hz TVs who are also likely to be fast-fiber customers — but at some point in the chain, you’re still delivering over a finite architecture.
And when you push to hundreds of thousands of users in a single region, you start to quantify your traffic levels in Tb/s. This is how ISPs measure total regional capacity. And for some regions or ISPs, that number is still in single digits.
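The arithmetic behind those Tb/s figures is simple, and worth running before an event. A minimal sketch, using hypothetical viewer and bitrate figures:

```python
def regional_demand_tbps(concurrent_viewers: int, avg_bitrate_mbps: float) -> float:
    """Aggregate unicast demand in Tb/s for one region (illustrative figures)."""
    return concurrent_viewers * avg_bitrate_mbps / 1_000_000  # Mb/s -> Tb/s

# 300,000 viewers pulling an 8 Mbps top rung already needs 2.4 Tb/s --
# a large slice of a region whose total capacity is in single-digit Tb/s.
print(regional_demand_tbps(300_000, 8.0))  # 2.4
```

Even modest regional concurrency multiplied by a vanity top rung quickly approaches the single-digit Tb/s regional capacities mentioned above.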
At these levels, the customer’s contractual connection speed is no longer the reference point of concern. It’s the in-region capability to deliver that. Fortunately, CDNs are developing and evolving tools to help us better understand and monitor all the factors that influence a customer’s end performance, so that we can target these directly.
Achieving “Better than Broadcast” Video Step-by-Step
As we move toward an “as-good-as or better-than broadcast” goal for OTT services, simply upping the bitrate isn’t the right approach.
In 2018, streaming of an Indian Premier League cricket match hit over 10 million concurrent users1. However, the average bitrate was still below 1 Mbps, avoiding the localised connectivity-provider congestion that might have occurred at higher bitrates.
I recommend constraining vanity bitrates to limited audiences or closed trials. Then, when you open the service to a wider audience where you can’t accurately estimate the peak demand, you’ll be in a better position to sanity-check your encoding ladder and efficiency.
By optimising the top two bitrates on your ladder, you should be able to serve more customers before you run into localised congestion issues. You might even be able to avoid them altogether. Whether you use PSNR, SSIM/SSIMplus, or VMAF values to quantify bitrates vs. quality, it pays to perform regular checkups and assessments.
The point at which you’ll see diminishing returns in perceptual quality against bitrate varies from device to device. But I suggest you use an objective quality assessment against your encoding ladder. This is more efficient and helps prevent problems further down the distribution chain.
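One way to operationalise that checkup is to compare each rung’s objective quality score with the rung below it and flag rungs whose gain is negligible. A minimal sketch, using hypothetical VMAF-style scores and a hypothetical gain threshold:

```python
def diminishing_rungs(ladder, min_gain=2.0):
    """Return the bitrates of rungs whose quality gain over the next-lower
    rung falls below min_gain points. Ladder entries are (bitrate_kbps,
    quality_score) pairs; scores are illustrative VMAF-style values."""
    ordered = sorted(ladder, key=lambda r: r[0])  # sort by bitrate
    flagged = []
    for (lo_br, lo_q), (hi_br, hi_q) in zip(ordered, ordered[1:]):
        if hi_q - lo_q < min_gain:
            flagged.append(hi_br)
    return flagged

# Hypothetical 1080p ladder: the 6 Mbps rung adds only 0.6 points over 4.5 Mbps.
ladder = [(1800, 82.0), (3000, 90.0), (4500, 94.5), (6000, 95.1)]
print(diminishing_rungs(ladder))  # [6000]
```

The threshold itself is a business decision; the point is that the comparison is scripted and repeatable, not eyeballed once at launch.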
Online tools to compress web images in-flight or in-cache and enable your pages to load faster have been available for years, either as part of a production workflow or CDN service. The same principle can be applied to VOD transcoding, using services like LightFlow.
Live video is more challenging, and the onus is on the encoder to get the choices right at the encoding profile stage. Still, you can improve the viewing experience across all devices if you start with content preparation.
It’s Time We Re-Learned the Art of Video Encoding and Justified that Encoding Ladder
Encoding efficiency has become a lost art. Though some OTT content companies are once again moving in the right direction, most have been selecting vendor presets and using stepped bitrate as their indicator of quality.
Although video quality correlates directly with bitrate, it doesn’t always follow that a 1080p video using a vendor’s default preset at 3 Mbps is higher quality than an optimised 1080p encoding at half that bitrate. You can achieve the same quality at a lower bitrate when you have a better understanding of the parameters, rather than just selecting from a handful of presets that expose nothing other than target bitrate.
For example, switching to smaller chunk sizes makes your video quality worse for the same bitrate. For a codec to be effective, it needs to exploit compression from one keyframe to the next. The longer the time between keyframes, the more efficient the compression can be.
A long group of pictures (GOP), where the keyframe is at the start of the chunk, suddenly becomes a short GOP when you reduce the chunk size. You need to maintain a keyframe at the start of the chunk. You may be targeting playback latency, but you’re indirectly making your video quality worse for the same bitrate.
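The relationship is mechanical: if a keyframe must sit at every chunk boundary, the chunk duration caps the GOP length. A small illustration, with hypothetical chunk sizes and frame rate:

```python
def max_gop_frames(chunk_seconds: float, fps: int) -> int:
    """With a keyframe required at each chunk start, the chunk duration
    caps the GOP length in frames."""
    return int(chunk_seconds * fps)

# At 50 fps, 6-second chunks allow a 300-frame GOP; 2-second chunks cut
# that to 100 frames, leaving the codec fewer inter-frames to exploit.
print(max_gop_frames(6, 50), max_gop_frames(2, 50))  # 300 100
```

Shrinking chunks from 6 to 2 seconds in this example cuts the maximum GOP to a third, which is exactly the quality-per-bitrate penalty described above.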
When you push the video bitrate up to compensate, it affects your scalability and the bottom line. Your encoding ladder impacts your OpEx costs. Taking the time to understand how your codec parameters directly affect compression offers real, quantifiable monetary savings.
Most content providers pay by the GB delivered or IP transit out of the origin. There may be additional costs for storage if your live stream becomes a DVR or archived on-demand stream.
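When billing is per GB, the savings from a leaner ladder are easy to quantify. A rough sketch, with hypothetical audience and event figures:

```python
def delivered_gb(avg_bitrate_mbps: float, concurrent_viewers: int, hours: float) -> float:
    """GB delivered for one live event (illustrative inputs)."""
    bits = avg_bitrate_mbps * 1e6 * concurrent_viewers * hours * 3600
    return bits / 8 / 1e9

# Shaving 1 Mbps off the average delivered rung for 100,000 viewers
# over a 2-hour event avoids 90,000 GB of delivery.
print(round(delivered_gb(1, 100_000, 2)))  # 90000
```

Multiply that by a season of events and the encoding-efficiency work pays for itself.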
Vanity Bitrates Can Have High Business Costs, Not Just OpEx
As the industry moves toward CDN flat-rate billing for larger customers, the delivery cost argument loses its impact. But when demand for service exceeds expectation, your encoding decisions can still have a significant impact on a telco’s architecture. This in turn can have a knock-on effect to your OTT service business reputation.
You can mitigate this with the decisions you make on your encoding ladder and profiles. This will become more critical if, as expected, OTT demand for live concurrent streams grows faster than CDN and telco ability to provision new capacity.
Codec profiles and specific encoding flags are outside the scope of this article, as is the choice of codec for your future needs and target audience. There are several industry articles on encoding efficiency.
If you don’t have this knowledge in-house, engage a media consultancy team, such as Akamai’s Global Consulting Services (GCS), to review your business objectives and suggest ways to integrate objective perceptual quality into your workflow.
We Need to Move Away From Quoting Bitrate as a De Facto Indicator of Encoding Quality
Bitrate is only a relative indicator of quality when all other encoding variables are constant.
Recent objective quality methods such as SSIMplus and VMAF take the target device and viewing environment into account, giving more meaningful figures than PSNR, which can be skewed easily.
In 2013, I updated a transcoding engine for multi-platform VOD. I targeted 8 Mbps for a 4K “Tears of Steel” video as one of the adaptive bitrate ladder steps because I knew I could do 2 Mbps for HD for viewing on a large tablet or computer monitor. The company I worked for didn’t have a large-screen 4K TV for testing, so I worked with a 28” 4K monitor.
I also knew that, at a push, I could do HD at 1.4 Mbps, which meant, for certain titles, on specific devices, UHD could be done at 5-6 Mbps. Maybe not high-action source material with constant scene changes, but definitely certain titles.
I could justify twice that bitrate using a perceptual quality metric, as my business goal was “good enough, for minimal distribution OpEx costs.” Admittedly, I used Mark I Eyeball, my subjective method for quality measurement. But if I were doing the same project today, I’d use an objective quality score like adjusted PSNR, SSIMplus, or VMAF.
A recent Akamai/Eurofins report2 showed how perceivable quality varies by viewing device. I wouldn’t necessarily advocate 8 Mbps as a guideline for a UHD title when viewed on a large-screen TV. But a lot of OTT services have grown from SD to HD to UHD without metrics to justify their encoding ladder, so there’s a good chance the encoding ladder is focused on bitrate rather than perceivable quality.
By using the same encoding principles and objective quality measures — and taking the target device and viewing environment into account — you can deliver exceptional-quality action movies and even live sports to more customers.
Delivering a “Better Than Broadcast” Experience Through Metrics, Not Vanity
Today, OTT can deliver higher bitrates — and therefore quality — than existing over-the-air (OTA) services. So we shouldn’t shy away from delivering high bitrates that enhance resolution, frame rate, and dynamic range. This will ultimately lead to an exceptional “better than broadcast” experience.
But business metrics, based on more than vanity bitrates, should drive that decision.
For instance, video codecs are very good at preserving detail. However, modern cameras introduce sensor noise, which the codec treats as background detail that needs to be preserved. This directly impacts efficiency and pushes up the bitrate unnecessarily.
You can encode efficiently at lower bitrates if you start by cleaning up the source material. Whether you receive multicast transport stream or SDI into your encoder, the extra processing power you use to denoise will pay dividends along the supply chain — lower bitrates mean lower transit costs and lower storage needs, even if your edge traffic bills at a flat rate.
If your encoder doesn’t come with source filter options to remove sensor noise or automatically denoise, look for edge-preserving smoothing or spatial low-pass video filters. Even the lowest setting can benefit the codec’s ability to compress.
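As one concrete option, ffmpeg builds commonly include the hqdn3d spatio-temporal denoise filter. The sketch below only assembles a command line; the filenames, strength value, and x264 settings are illustrative, and the filter strength should be tuned per source:

```python
def denoise_cmd(src: str, out: str, strength: float = 1.5) -> list:
    """Build an ffmpeg command that applies a light denoise (hqdn3d)
    ahead of encoding. Assumes an ffmpeg build with hqdn3d and libx264;
    all values here are illustrative, not recommendations."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"hqdn3d={strength}",   # even a low setting helps the codec
        "-c:v", "libx264", "-preset", "slow",
        out,
    ]

print(" ".join(denoise_cmd("contrib.ts", "clean.mp4")))
```

Running the filter once at the encoder is far cheaper than carrying the noise as extra bits through every hop of the distribution chain.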
Evaluate the Devices You Actually Deliver — Not the Ones You’re Targeting
You want to avoid a situation where less than 5% of your audience dictates a less efficient profile for everyone. Long gone are the days when you needed to default to baseline profile to ensure you can reach your target audience.
A simple device check on the edge, and a redirect to a manifest containing a subset of the encoding ladder, will allow you to use a more efficient profile for the rest of the audience. Manifest manipulation/personalisation saves you from delivering top-rung renditions where the perceivable quality gain is minimal for the device.
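The edge-side logic can be very small. A minimal sketch, where the device classes and per-class bitrate caps are hypothetical placeholders for a real device-detection database:

```python
# Hypothetical per-device-class caps (kbps); a production system would
# key this off edge device detection rather than a static dict.
DEVICE_CAP_KBPS = {"phone": 3000, "tablet": 6000, "tv": 16000}

def renditions_for(device: str, ladder: list) -> list:
    """Return the ladder subset a device class actually benefits from;
    unknown devices get the full ladder."""
    cap = DEVICE_CAP_KBPS.get(device, max(ladder))
    return [br for br in ladder if br <= cap]

ladder = [800, 1800, 3000, 6000, 12000]
print(renditions_for("phone", ladder))  # [800, 1800, 3000]
```

The manifest served to each client then simply lists the filtered subset, so a phone never wastes last-mile and regional capacity fetching a 12 Mbps rung it can’t perceivably use.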
The same logic can also be applied to your top-end bitrate across the board. You might be encoding at 50 or 60 fps because a tiny percent of your audience is watching sports on a large-screen TV with a high refresh rate. If you asked that audience to pay a premium for the higher frame rate, would they? Or would they be content at 25 or 30 fps?
Unnecessarily High Frame Rates Can Also Hit Your Infrastructure, Delivery, and OpEx Costs
The arguments against vanity bitrates also apply to vanity frame rates.
Making a business decision to remove every other frame reduces the amount of work the codec has to do on a live feed. This means it has more CPU headroom to encode efficiently in the time given, leading to savings across the board.
Manifest manipulation, either on the edge or via your content management system (CMS), allows the higher frame rate rendition to be delivered only to those customers who would see benefit.
Remove High-End Bitrate References When the Quality Gain is Negligible
I think most people accept that per-title encoding, or per-scene encoding, is better than a generic fixed bitrate ladder. For VOD, this is clearly the way to go. Post-encode SSIMplus or VMAF analysis (or similar), along with manifest manipulation, can remove high-end bitrate references on a per-fragment basis when the quality score gain is negligible.
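Per-fragment pruning of the top rung can be expressed as a simple post-encode pass. A sketch, with illustrative VMAF-style scores and a hypothetical gain threshold:

```python
def prune_top_rung(segments, min_gain=1.0):
    """For each segment, drop the top-rung reference when its quality
    score gain over the next rung down is negligible. Each segment is a
    list of (bitrate_kbps, score) pairs; scores are illustrative."""
    kept = []
    for seg in segments:
        rungs = sorted(seg, key=lambda r: r[0])
        if len(rungs) > 1 and rungs[-1][1] - rungs[-2][1] < min_gain:
            rungs = rungs[:-1]  # negligible gain: omit the top reference
        kept.append([br for br, _ in rungs])
    return kept

# Two fragments: a static scene where the top rung adds 0.3 points,
# and a high-motion scene where it adds 3 points.
segments = [
    [(3000, 94.0), (6000, 94.3)],
    [(3000, 88.0), (6000, 91.0)],
]
print(prune_top_rung(segments))  # [[3000], [3000, 6000]]
```

Static scenes shed their top-rung references while high-motion scenes keep them, which is the per-title/per-scene behaviour described above.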
Perceivable quality adjustment for dynamic VOD encoding ladders is already built into some VOD workflow tools. However, implementing this for live OTT services, either on the encoder or as a CDN pass-through tool, will require some work. At the very least, it will require sacrificing some latency.
You’ll need to make a business decision: Is the drive for lower latency worth the time something like this would need to do its job?
Use High Bitrates When the Business Case Justifies It
Personally, I’m not opposed to high-bitrate encoding for streaming; 30+ Mbps for UHD or 100+ Mbps for VR can be justified. But so can 8-10 Mbps, with the right business argument backed by objective, perceivable-quality results.
The Eurofins/Akamai marketing study mentioned earlier has shown that the experience sweet spot for HD content is higher than the current average OTT bitrate used for HD; there is a solid argument for increasing the bitrate in most cases. And with flat-rate CDN billing, there is less of an argument against it.
Regional intelligence about connection and throughput speeds, such as those regularly published by Akamai, along with telco capacity, can also influence your encoding ladder.
There’s no reason to create obstacles for yourself by encoding at high vanity bitrates for the sake of it. By creating an encoding ladder based on perceivable-quality measurements for devices and environments, not bitrates alone, you can consistently deliver excellent concurrent viewing experiences to all your end users — even as your live content gets more popular and more regionally targeted.
Here are my proposed best practices for your live OTT service:
● Assume the popularity of your content can be an order of magnitude larger than your predictions, and base encoding ladder decisions around those traffic levels
● Use available infrastructure and regional throughput intelligence to bracket your ladder for a specific region
● Revisit your encoding ladder rungs using perceptual quality metrics (e.g., SSIMplus/VMAF) as a guide, rather than bitrate alone
● Ensure your bottom rung is low enough to still serve an adequate experience in the event of localised congestion outside of your control (e.g., if you are not delivering within a dedicated MCDN [managed CDN])
● Customise the top rungs using perceivable-quality targets by environment and device (not just device capability) and limit the delivery of your rungs to those accordingly, either by manifest manipulation or CMS targeting
Regularly revisiting your encoding ladder based on these best-practice proposals should ensure a smoother rollout and ongoing quality of experience for your OTT customers, especially over shared infrastructure.
1 https://www.akamai.com/uk/en/about/news/press/2018-press/akamai-hits-new-high-for-peak-web-traffic-delivered.jsp (72 Tbps peak, 10.39 million concurrent for IPL)
*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Del Fowler. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/4n6YOuOM6BI/do-high-vanity-bitrates-choke-your-live-ott-service-out-of-the-gate.html