
Akamai Edge Cloud: Scaling IoT, Part 1

The Internet of Things (IoT) ecosystem is an exciting emerging market that is disrupting the way we design infrastructure to support businesses. Smart devices, homes, cities, cars, and the automation supporting the Industry 4.0 industrial revolution are all placing new demands on existing internet infrastructure. Gartner estimates that the enterprise IoT platform market will grow to $7.6 billion in 2024, a 31% compound annual growth rate (CAGR).

The Akamai Intelligent Edge Platform is one of the world's largest distributed networks and has been delivering edge computing solutions for more than 20 years. More than 1.3 billion devices access our platform every day. By 2025, we anticipate 20-50 billion IoT connections; in fact, IDC forecasts 41.6 billion connected IoT devices generating 79.4 zettabytes of data in 2025. As we look to the next phase of online growth, there's a scalability challenge looming.

Changing Online Traffic Patterns

Originally created to support an internet of people, the Akamai Intelligent Edge Platform distributes workloads from a centralized location to the edge, close to where users and devices consume them. This high-level traffic pattern is a fan-out model of one-to-many communication, and it shaped the architecture that allows the internet to scale.

[Figure: Internet of People (Content Delivery Networks Circa 2000)]

Overlay IoT, and different traffic patterns form. Now data is generated at the endpoints of the internet, then collected and aggregated for processing and analysis in centralized public or private cloud data centers. This represents a many-to-one, or fan-in, traffic model: the inverse of the pattern the internet was designed for.

[Figure: Internet of People Overlaid with IoT]

As IoT applications increase in complexity, we’re seeing more direct device-to-device communication that is intensifying in speed and volume, producing a many-to-many traffic pattern. We also see that the ability to deploy application code uniformly to a global footprint increases velocity, scalability, and availability, while reducing the operational burdens for organizations. 

Evolving Application Architecture

Applications are evolving to meet new online demands. Legacy web applications consist of key building blocks (shown below) that make them simple to develop, test, and deploy.

[Figure: Legacy Web Application Architecture (Source: microservices.io)]


When centralized in the cloud, these applications are relatively easy to scale using autoscaling and elastic technologies. However, as an application grows larger and more complex, its codebase becomes monolithic and more difficult to maintain.

A monolithic code structure makes continuous integration and continuous delivery (CI/CD) and flexible feature development challenging. With all functionality locked in a common codebase, it is not possible to scale individual components to match the needs of a particular service or application.

A microservice architecture addresses the agility requirements of today’s applications. Microservices empower multiple teams to work on independent pieces, enabling quick iteration and deployment.
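
To make the contrast concrete, here is a minimal sketch of one independently deployable microservice, written in TypeScript using Node's built-in http module. The service name, route, data, and port are all hypothetical; the point is that a service this small can be built, tested, deployed, and scaled on its own schedule, separate from the rest of the application.

```typescript
// inventory-service.ts -- a minimal, hypothetical microservice sketch.
// A service like this owns one capability, ships independently,
// and can be scaled without touching any other part of the application.
import { createServer } from "node:http";

// In-memory stand-in for this service's private data store.
const stock: Record<string, number> = { "widget-a": 42, "widget-b": 7 };

const server = createServer((req, res) => {
  // Single responsibility: report stock levels, nothing else.
  const match = req.url?.match(/^\/inventory\/([\w-]+)$/);
  if (req.method === "GET" && match) {
    const units = stock[match[1]] ?? 0;
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ sku: match[1], units }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// The port is illustrative; in practice it comes from deployment config.
server.listen(8080, () => console.log("inventory-service listening on :8080"));
```

Because each such service exposes a narrow contract over the network, one team can rewrite or rescale it without coordinating a release of the whole codebase.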

[Figure: Microservice Architecture (Source: microservices.io)]

Computing at the Edge

Relying on a central location to respond to requests adds latency to microservices execution. Edge computing brings data, insights, and decision-making closer to the things that act upon them. A central location can be thousands of miles away, but the edge is as close as possible to the client. The goal is a reliable, scalable implementation in which latency doesn't disrupt the flow of data, especially real-time data, or undermine the purpose and performance of an application.

The Akamai platform moves application processing that would have taken place centrally out to the edge to improve responsiveness and save costs. This capability is commonly referred to as serverless computing, since it abstracts the scaling of application processing away from the management of server infrastructure, whether physical, virtualized, or containerized. This reduces the amount of data and data center infrastructure needed to process IoT requests and alleviates the impact of bandwidth constraints in the middle mile.
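
As a sketch of what serverless processing at the edge can look like, the hypothetical handler below answers an IoT device's routine telemetry check-in directly at the edge rather than forwarding every request to a central origin. The event and response shapes, thresholds, and collector URL are illustrative assumptions, not a specific product's API.

```typescript
// edge-telemetry.ts -- hypothetical serverless edge handler.
// The EdgeRequest/EdgeResponse shapes below are illustrative, not a real platform API.
interface EdgeRequest {
  method: string;
  path: string;
  json(): Promise<unknown>;
}
interface EdgeResponse {
  status: number;
  body: string;
}

// Runs at the edge location nearest the device. Routine readings are
// acknowledged locally; only anomalies travel on toward the central cloud.
export async function onRequest(req: EdgeRequest): Promise<EdgeResponse> {
  if (req.method === "POST" && req.path === "/telemetry") {
    const reading = (await req.json()) as { deviceId: string; tempC: number };
    if (reading.tempC < 80) {
      // Normal reading: acknowledged at the edge, no middle-mile round trip.
      return { status: 202, body: "ok" };
    }
    // Anomaly: forward to the central collector (placeholder URL).
    await fetch("https://collector.example.com/alerts", {
      method: "POST",
      body: JSON.stringify(reading),
    });
    return { status: 202, body: "escalated" };
  }
  return { status: 404, body: "not found" };
}
```

In this pattern the common case never crosses the middle mile at all, which is where the responsiveness and bandwidth savings come from.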

IoT generates vast quantities of valuable data. Big data processing involves cleansing and sanitizing the data before using machine learning (ML) and artificial intelligence (AI) to generate analytics and derive insights for meaningful business impacts. Data preparation — collecting, cleaning, and organizing — already accounts for around 80% of data science work.

[Figure: Source: Forbes]

IoT devices will generate too much data to process manually, requiring workloads to be automated and distributed. To be as efficient as possible, data should be processed as close as possible to where it is generated. Most collection and sanitization can be performed at the edge, avoiding the cost of backhauling all data to the hyperscale cloud just to discard most of it, a practice referred to as data thinning.

Data thinning ensures that only the best data makes it to the cloud, where it can be properly ingested and used to train AI models that derive valuable insights. Without a distributed edge cloud architecture, solutions that require large quantities of high-quality data, such as enterprise IoT, quickly become impractical and costly.
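
Here is a minimal sketch of data thinning, assuming a window of temperature telemetry: the edge drops malformed samples and forwards only a compact per-device aggregate, so the cloud ingests a fraction of the raw volume. The types, field names, and validity thresholds are all hypothetical.

```typescript
// data-thinning.ts -- hypothetical sketch of edge-side cleansing and thinning.
interface Reading { deviceId: string; tempC: number; ts: number }
interface Aggregate { deviceId: string; count: number; min: number; max: number; mean: number }

// Cleansing step: discard malformed or physically implausible samples at the edge.
function isValid(r: Reading): boolean {
  return Number.isFinite(r.tempC) && r.tempC > -50 && r.tempC < 150;
}

// Thinning step: collapse a window of raw samples into one compact aggregate
// per device, so only summarized, high-quality data is backhauled to the cloud.
function thin(window: Reading[]): Aggregate[] {
  const byDevice = new Map<string, Reading[]>();
  for (const r of window) {
    if (!isValid(r)) continue; // bad samples never leave the edge
    const group = byDevice.get(r.deviceId) ?? [];
    group.push(r);
    byDevice.set(r.deviceId, group);
  }
  const aggregates: Aggregate[] = [];
  for (const [deviceId, readings] of byDevice) {
    const temps = readings.map((r) => r.tempC);
    aggregates.push({
      deviceId,
      count: temps.length,
      min: Math.min(...temps),
      max: Math.max(...temps),
      mean: temps.reduce((a, b) => a + b, 0) / temps.length,
    });
  }
  return aggregates;
}

// Three raw samples collapse to one aggregate; the malformed one is dropped at the edge.
console.log(thin([
  { deviceId: "sensor-1", tempC: 21.5, ts: 1 },
  { deviceId: "sensor-1", tempC: 22.1, ts: 2 },
  { deviceId: "sensor-1", tempC: NaN, ts: 3 },
]));
```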

Migrating Cloud Services

Cloud services today are heavily centralized, which makes them easy to develop and deploy to a selected data center but very challenging to scale to a distributed cloud model. Replicating services across multiple data centers and keeping data synchronized, while evenly distributing load and maintaining a global, real-time view of that data, results in costly integration and operational complexity. Migrating to a distributed cloud model allows edge computing to offload IoT requests from centralized cloud services for increased performance and reliability.

[Figure: Centralized Cloud Services]

Over the past two decades, Akamai has learned a great deal about what it takes to operate a distributed network of resources at scale. We developed the technology, tools, and processes that allow us to manage highly distributed resources at the edge of the internet. Akamai maintains real-time visibility into all edge devices, which requires streaming vast amounts of data from the edge back to our Network Operations Command Center (NOCC), Security Operations Command Center (SOCC), and Broadcast Operations Control Center (BOCC). This edge-to-cloud exchange has the same traffic pattern characteristics as the fan-in of IoT traffic.

[Figure: Distributed Cloud Services]

As we look to manage an exponential increase in devices and traffic, we understand the need to further leverage the principles of distributed communication, processing, and storage to deliver on the promise of IoT.

Introducing Akamai Edge Cloud

Successfully scaling to meet the online growth driven by the proliferation of connected devices, while providing the performance IoT demands, requires a comprehensive distributed technology stack, with security, messaging, and processing as close to the devices and data as possible. Akamai delivers the scale, resiliency, and security to help you meet your IoT cloud infrastructure needs. In my next post, we'll examine how Akamai Edge Cloud can help you realize the potential of IoT.


*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Michael Archer. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/fjMIBe-hVu8/akamai-edge-cloud-scaling-iot-part-1.html
