Akamai’s Polymorphic AI Framework Preemptively Manages Bots

Too many security efforts react to threats as they arrive. While security teams often succeed through Herculean efforts, being constantly under siege takes a toll on your resources. The relentless barrage of bot attacks will eventually crack the human- and system-based methods used to block or mitigate them.

Some cyberattacks, like distributed denial of service (DDoS), literally overwhelm systems with brute force. Other attacks, like malware and viruses, exploit the tiniest vulnerability to sneak in. Unfortunately, bots can do both. That’s why it’s important not to wait for bot attacks to hit your website before you stop them. 

Bot management can proactively “disrupt the disruption” — relieving the stress of reactive responses and allowing you to regain control of your website. By proactively managing bot activity (there are helpful bots that you don’t want to block), you can keep harmful bots away from your business.

To help you succeed in managing bot activity, Akamai Bot Manager uses a polymorphic artificial intelligence (AI) framework as the basis of our solution. The framework encompasses three core concepts:

  1. Akamai architecture differentiates our data collection and traffic visibility
  2. Machine learning (ML) leverages deep expertise and extensive online data to create the most accurate algorithms
  3. Data correlation provides a network effect that allows all customers to benefit from each customer’s bot detections

Akamai Architecture

Bot Manager is integrated into the Akamai Intelligent Edge Platform instead of attached at a single point (see our reference architecture below). Being in-line allows Bot Manager to see the traffic at the edge, where a user first connects to an application, providing the cleanest data on traffic volume, traffic patterns, and traffic types like human, bot, DDoS, virus, malware, and others. 

Another key benefit of being in-line is the ability to detect many bots at the edge, in motion, in the network. Other bot management solutions must be integrated into a content delivery network (CDN) at a single point, as well as into the applications themselves, meaning you must change your applications before bot management can be implemented.


A third, and critical, benefit is the ability to collect traffic data as a first party. Across the network we see 11.5 billion bot requests per day and 280 million bot logins. We also have deep data sets on human behavior telemetry, allowing us to compare bots with humans as part of our detections. Other solutions rely on traffic and telemetry data from outside sources. 

As a result of being in-line as a first party, Bot Manager can track telemetry at the edge, closest to the user or bot request. That’s where the data is uncluttered by other kinds of traffic noise like the data at the origin. Most bots are stopped here and never get to the origin. But this session-level, edge collection also means Bot Manager can see bots in real time to deliver pre-cognitive detections — so if a harmful bot arrives at a login page, Bot Manager is ready to stop it.

Machine Learning and Data Science

Collecting “clean traffic” data across a wide distribution of data types and in large volume makes our machine learning (ML) algorithms more accurate. ML trains computers using algorithms to look for patterns in data and offer highly educated guesses about what those patterns mean. Many ML algorithms are open source, and creating algorithms is a relatively straightforward exercise today for data scientists. If the same open source tools are used to build ML algorithms, why don’t all bot management solutions detect the same numbers and types of bots? The ML is only as good as the framework it uses, the algorithms it deploys, and the data those algorithms analyze.

Akamai threat researchers analyze 130 TB of new attack data every day. Bot Manager includes both supervised (taught to find patterns by following existing patterns) and unsupervised (automatic pattern detection among the given data points) algorithms. These highly refined algorithms identify and detect the most dangerous bots.
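To make the supervised/unsupervised distinction concrete, here is a minimal, self-contained sketch. Akamai’s actual models are proprietary; the function names, features (requests per minute, error ratio), and thresholds below are invented purely for illustration.

```python
# Toy contrast of the two approaches named above, using stdlib Python only.

def supervised_classify(sample, labeled_examples):
    """Nearest-centroid classifier: learns from traffic that has already
    been labeled 'human' or 'bot' (supervised learning)."""
    centroids = {}
    for label in {lbl for _, lbl in labeled_examples}:
        pts = [x for x, lbl in labeled_examples if lbl == label]
        centroids[label] = tuple(sum(dim) / len(pts) for dim in zip(*pts))

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    return min(centroids, key=lambda lbl: dist(sample, centroids[lbl]))

def unsupervised_flag(sample, baseline, threshold=3.0):
    """Z-score anomaly detector: flags outliers relative to observed
    traffic with no labels at all (unsupervised learning)."""
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    std = var ** 0.5 or 1.0
    return abs(sample - mean) / std > threshold

# Labeled telemetry: (requests/min, error ratio) -> label
labeled = [((2, 0.01), "human"), ((3, 0.02), "human"),
           ((300, 0.40), "bot"), ((250, 0.35), "bot")]
print(supervised_classify((280, 0.38), labeled))   # bot
print(unsupervised_flag(500, [2, 3, 4, 2, 3, 5]))  # True
```

The supervised path needs a labeled history of known bots; the unsupervised path catches previously unseen behavior purely because it deviates from the baseline — which is why a production system, as the post describes, combines both.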

Across the Akamai network, we see traffic from 1.3 billion unique devices daily with record traffic of 164 Tbps. Extensive internet visibility allows our algorithms to learn more, faster, with a wider distribution of data to analyze. The telemetry and data improve the algorithms and return results in real time — a cornerstone of our polymorphic AI framework.

Correlating Data

Access to advanced algorithms and ML that accurately detect the most sophisticated bots is valuable across our entire customer base. However, individual customers also benefit from the network effect — where each additional participant in a network increases value for all other participants. Collective intelligence from a holistic view of bot traffic across all customers helps protect you from harmful bots, even before they attack.

We protect some of the largest, most high-profile companies in the world — which are often the target of the most advanced bot operators. For example, 8 of the 10 largest banks trust Akamai. Credential stuffing and account takeovers are a huge problem for financial institutions, and Akamai sees an average of 583.6 million credential abuse attempts per day. Akamai sees all that unique traffic, collects the data about the newest bots, and algorithmically correlates the intelligence across all customers.

If a new bot is detected attacking one customer, the data about the bot is added to the Akamai library and algorithms to protect all customers from the new bot. This network effect doesn’t just allow customers to mitigate bots effectively, it also allows Akamai to preemptively stop some bots from attacking in the first place. And since the Akamai platform is so vast, the network effect of Bot Manager is exponentially more valuable to each customer.
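The mechanism behind that network effect can be sketched in a few lines. The class and signature names below are hypothetical, not Akamai’s API; the point is simply that one shared detection library means a bot detected at any customer is blocked for every customer.

```python
# Hypothetical sketch of a shared detection library (the 'network effect').

class SharedBotLibrary:
    """One signature library shared across all customers."""
    def __init__(self):
        self.signatures = set()

    def report_detection(self, signature):
        # A detection at any one customer enriches the shared library.
        self.signatures.add(signature)

    def is_known_bot(self, signature):
        return signature in self.signatures

library = SharedBotLibrary()

# Customer A's edge detects a new bot fingerprint...
library.report_detection("headless-chrome/credential-stuffer-v2")

# ...so Customer B can block it preemptively, before it ever attacks them.
print(library.is_known_bot("headless-chrome/credential-stuffer-v2"))  # True
```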

Consider the situation in which an enterprise sees suspicious activity in one geography. Bot traffic and bot signals aren’t universally the same. Akamai has more than 300,000 servers in more than 4,100 unique locations, which allows our algorithms to automatically correlate signals that vary by location. For example, human traffic from the United States using the privacy-protecting browser Brave will produce different traffic patterns than traffic from Asia, where users prefer other browsers. This means we can correlate signals more precisely based on geographic data, which wouldn’t be possible without our ability to see traffic in all geographies. By correlating what Akamai knows about geographic differences in traffic with the attack traffic we see at leading-edge customers, Bot Manager can make more accurate judgments about potentially harmful bot or legitimate human activity for each customer.
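A toy illustration of that geography-aware correlation: the same browser share can be normal in one region and anomalous in another, so baselines are kept per geography rather than globally. The region names, baseline numbers, and tolerance below are invented for the sketch.

```python
# Per-geography baselines instead of one global threshold.

GEO_BASELINES = {
    # region: expected share of traffic from the Brave browser (invented)
    "US":   0.05,
    "APAC": 0.005,
}

def brave_share_is_anomalous(region, observed_share, tolerance=3.0):
    """Flag traffic whose Brave share exceeds `tolerance` times the
    regional baseline, rather than a single global cutoff."""
    return observed_share > GEO_BASELINES[region] * tolerance

# 4% Brave traffic is unremarkable in the US...
print(brave_share_is_anomalous("US", 0.04))    # False
# ...but the same 4% stands out sharply against the APAC baseline.
print(brave_share_is_anomalous("APAC", 0.04))  # True
```

The design point is that the anomaly test is relative to a regional baseline, so identical observed traffic can be benign in one geography and a bot signal in another — which is only possible with visibility into both.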

Polymorphic AI Framework: Preempting Costly Bot Impact to Your Business 

Akamai’s polymorphic AI framework allows you to relieve the constant pressure that comes from waiting around to see what attacks your site next. Instead, the polymorphic AI framework lets organizations detect the most sophisticated bots at the edge — stopping bot disruptions by proactively disrupting the bots first. 

Successfully detecting bots proactively requires more than an understanding of bots. To be effective over time as bots evolve, organizations need more sophisticated techniques that leverage expertise in related security areas like malware and DDoS attacks. The network effect of Bot Manager means you will also be protected from bots that haven’t attacked you yet.

There will be more opportunities to engage with us on this and more at Edge Live | Adapt. Sign up to see how customers are leveraging these improvements, engage in technical deep dives, and hear from our executives how Akamai is evolving for the future.

*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Christine Ross. Read the original post at: