In the first week of March, Radware was at RSA Conference 2019 in San Francisco to discuss web application security with industry pundits. Many of these executives at RSA acknowledged having firsthand experience dealing with “bad” bots. The conversations were interesting, with topics ranging from the types of attacks they face from bad bots to the approaches adopted by attackers, and how they deal with these attacks.
Most of these executives admitted deploying an in-house mechanism or solution as their first response to bot attacks; some even said they still rely on an in-house solution. Next, we headed to Adobe Summit, then to Phocuswright, ICMA, and GOMS. At all these conferences, we kept asking executives what their first response to bad bots was. One thing was common across all these meetings: these businesses first deployed an in-house solution and later replaced it with a dedicated one.
After these discussions, we were left with two questions: why do in-house bot management solutions fail, and why don’t businesses deploy a dedicated bot management solution as soon as they discover bot attacks, despite knowing the risks a bot attack involves, including a security breach?
To uncover why, we researched various types of in-house bot management solutions and their pitfalls.
Types of In-House Bot Management
We found that there are four types of in-house bot management solutions that organizations deploy.
Manual Log Analysis: Security teams manually prepare a list of suspected IP addresses from server logs. Suspected IPs are then blocked through the access control lists of WAFs or SIEM tools to prevent them from accessing web applications.
Rate Limiting: Organizations limit the number of visits from a single IP address. Rate-limiting solutions work on predefined rules.
Basic Fingerprinting: A solution that collects IP- and header-centric information to identify and block malicious bots.
Advanced In-house Bot Management: Solutions built using in-house data while leveraging basic machine-learning models.
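As a rough illustration of the rate-limiting approach described above, here is a minimal sliding-window counter keyed by client IP. This is a sketch, not Radware's or any vendor's implementation; the window and threshold values are illustrative assumptions, not figures from the post.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- real deployments tune these per endpoint.
WINDOW_SECONDS = 60
MAX_HITS_PER_WINDOW = 100

hits = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(ip, now=None):
    """Sliding-window rate limiter: block an IP that exceeds the threshold."""
    now = time.time() if now is None else now
    window = hits[ip]
    # Drop timestamps that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_HITS_PER_WINDOW:
        return False  # limit exceeded -> treat as a suspected bot
    window.append(now)
    return True
```

The predefined-rule nature of this approach is exactly its weakness: any attacker who knows (or probes for) the threshold can simply stay under it, as the case studies below show.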
Security researchers from ShieldSquare studied the traffic of organizations that had deployed in-house bot management and found that these solutions do more harm than good. As Figure 1 (below) highlights, these in-house bot management solutions fail to detect much of the actual bad bot traffic.
In this blog, let us explore more about advanced in-house bot management solutions through various case studies.
A Case Study of Advanced In-house Bot Management
Security researchers from Radware wanted to understand how businesses are leveraging machine learning to develop advanced in-house bot management solutions. They studied the traffic of a group of businesses that had attempted to build an in-house bot management solution before deploying a dedicated one, and the results were astonishing.
Figure 2: A Case Study of Advanced In-house Bot Management Solution
As shown in Figure 2 (above), against 22.39% of actual bad bot traffic, advanced in-house bot management solutions were able to detect only 11.54% of bad bots. Not only did these solutions fail to detect most of the bad bots, but of the 11.54% they did detect, nearly 50% were false positives. Upon realizing that their attempt to build an in-house bot management solution had failed, these organizations moved to dedicated bot management solutions.
To better understand why these businesses moved to a dedicated bot management solution, our security researchers studied the reasons behind the failure of these advanced in-house bot management solutions.
Pitfalls of Advanced In-House Bot Management
In-house bot management solutions struggle to understand distinctive user behavior, producing high false-positive and false-negative rates that degrade user experience. Here are a few points that explain how advanced in-house bot management solutions erroneously cause false negatives and positives.
Higher False Negatives: In-house bot management solutions are not optimized to consider the various factors at play when analyzing traffic on a website, such as sudden surges in traffic, low and slow hits, and mutating bots.
Let’s take a closer look at what happened with organizations that tried to stop bots through their in-house resources and how their approach caused false negatives.
Consider a case of a credential stuffing attack on an e-commerce firm. The attack was executed using a combination of different techniques to bypass security measures while masquerading as genuine users. The attackers created a pool of 20,106 IPs distributed across 32 domains, 27 geographical locations, and 126 ISPs and combined that with their exploit kit to evade detection. With these attack methods, attackers were able to carry out 1,033 unique URL hits on the login pages to perform a credential stuffing attack (see figure 3).
The e-commerce firm’s in-house bot management solution failed to detect such a vastly distributed, large-scale attack because of the sophisticated techniques the attackers used.
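The numbers in this attack show why per-IP thresholds are blind to wide distribution: spread across the whole IP pool, each address needs to make far less than one login attempt on average. A back-of-the-envelope calculation using the figures from the attack described above:

```python
# Figures from the credential stuffing attack described above.
total_hits = 1033      # unique URL hits on the login pages
attacker_ips = 20106   # pool of IPs used by the attackers

hits_per_ip = total_hits / attacker_ips
print(f"Average hits per IP: {hits_per_ip:.3f}")  # well below one request per IP

# Even an aggressive per-IP limit (illustrative value) never triggers:
per_ip_limit = 5  # hypothetical login attempts allowed per IP
print("Per-IP rate limit triggered:", hits_per_ip > per_ip_limit)
```

With roughly 0.05 hits per IP on average, no realistic per-IP rule fires, even though the aggregate attack is large.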
Another way cybercriminals execute sophisticated attacks is the low and slow attack. The graph in Figure 4 (below) shows two plots of bot hits versus IP addresses used in a scraping attack on an e-retailer. The first plot shows basic bots that are easy to detect. The second plot, in contrast, shows how sophisticated bots use thousands of IP addresses in one attack instance, operating low and slow, to stay under the rule-based thresholds of the advanced in-house bot management solution used by this e-retailer.
Using this low and slow technique, attackers scraped product information and pricing details of 651,999 products from 11,795 categories of this e-commerce portal while an in-house bot management solution was on duty.
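To see why the low and slow approach works at scale, consider some illustrative numbers (assumptions for the sketch, not figures from the post): each bot IP stays comfortably under a hypothetical per-IP limit, yet the fleet's aggregate daily volume reaches the order of magnitude of the scrape described above.

```python
# Illustrative "low and slow" scrape: many IPs, each under the per-IP limit.
PER_IP_DAILY_LIMIT = 100    # hypothetical hits/day allowed per IP
bot_ips = 12000             # size of the attacker's IP pool (illustrative)
hits_per_ip_per_day = 60    # each bot stays comfortably under the limit

daily_scrape_volume = bot_ips * hits_per_ip_per_day
print(f"Requests scraped per day: {daily_scrape_volume:,}")
print("Any single IP blocked:", hits_per_ip_per_day > PER_IP_DAILY_LIMIT)
```

No individual IP ever trips the rule, yet the attacker harvests hundreds of thousands of pages per day; only cross-IP, aggregate analysis would reveal the attack.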
Higher False Positives: In-house bot management solutions don’t account for domain-specific human behavior. For example, on portals with live content, such as news portals and social media sites, some users spend more time scrolling through their feeds or browsing the site, and so have comparatively longer session times than users of websites with relatively static content. We observed that the time-series regularity detection in in-house bot management solutions identifies these users as bots.
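A naive regularity heuristic of the kind alluded to above can be sketched as a check on how uniform a client's inter-request intervals are (low coefficient of variation means "machine-like" timing). The threshold and the sample timings are illustrative assumptions; the point is that a human steadily scrolling an infinite feed can also produce near-constant intervals and get flagged.

```python
from statistics import mean, stdev

def looks_like_bot(intervals, cv_threshold=0.2):
    """Naive regularity heuristic: flag traffic whose inter-request
    intervals (in seconds) are 'too regular', i.e. have a low
    coefficient of variation."""
    cv = stdev(intervals) / mean(intervals)
    return cv < cv_threshold

# A scripted bot polling every ~2 seconds:
bot = [2.0, 2.1, 1.9, 2.0, 2.05, 1.95]

# A human steadily scrolling an infinite feed -- autoloaded pages also
# arrive at near-constant intervals, so the heuristic misfires:
human_feed_scroller = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0]

print(looks_like_bot(bot))                  # True
print(looks_like_bot(human_feed_scroller))  # True -> false positive
```

Both traces are flagged, illustrating how timing regularity alone cannot separate long human sessions on live-content sites from automated traffic.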
In-house bot management solutions also lack global threat intelligence and rely on in-house data, which hampers their ability to identify sophisticated, human-like bad bots. Dedicated bot management vendors, on the other hand, serve thousands of global customers. These firms collect data from end users’ devices to build a comprehensive database of bot (both good and bad) and human fingerprints, and they leverage this database to dynamically improve their bad bot detection and mitigation capabilities.
Organizations that deploy in-house bot management solutions often encounter a clash of commitments as bot management is not their area of expertise. Developing and maintaining an advanced bot management solution requires deep industry expertise, in-depth domain experience, and continuous improvement on detection capability and intelligence, which is only possible through a dedicated vendor.
Bot management is a very niche space and requires comprehensive understanding and continuous research to keep up with cybercriminals. Businesses must deploy a dedicated bot management solution to effectively manage bot traffic (both good bots and bad bots) without affecting user experience.
Read “The Ultimate Guide to Bot Management” to learn more.
Pavan leads the Bot Management Solutions business at Radware. With decades of experience, he drives strategy, marketing and product functions for Radware Bot Manager. Pavan is a co-founder of ShieldSquare (now Radware Bot Manager) and ArrayShield.
*** This is a Security Bloggers Network syndicated blog from Radware Blog authored by Pavan Thatha. Read the original post at: https://blog.radware.com/security/2019/10/why-in-house-bot-management-solutions-are-unreliable/