In this series of blog posts, we have been analyzing the major forces that are reshaping the way the industry approaches application security. In this post, we lay out the foundations for a new approach to security that not only solves the problems of traditional web application firewalls (WAFs), but also addresses the new challenges posed by the changing application and threat landscapes. In case you missed any of the previous sections, you can check them out here:
The Application Landscape – The move to a web-by-default model for applications and what it means for the enterprise attack surface.
New Development Strategies – Analyzing the impact of DevOps and new architectures such as microservices for security teams.
The Threat Landscape – A look at how threats have evolved and some of the challenges to traditional application security.
(Reliable) Action is Required
As we have seen in the previous articles, enterprises have far more applications exposed to the Internet than ever before, and those applications are facing an unprecedented volume and diversity of threats. This makes it even more essential that WAFs begin to actually deliver on their stated purpose – to detect and block threats.
The ability to reliably block threats has long been a challenge for WAFs. Security teams typically spend considerable time and effort constantly tuning rules and signatures to manage false positives. Alternatively, many WAFs are deployed in a listen-only mode that detects potential threats and relies on staff to follow up and investigate. The problem is that both of these approaches depend on human effort to scale. That was painful even when organizations had only a few public-facing applications; it becomes impossible when virtually all applications are public-facing. To keep pace, application security solutions must be able to take action in real time while vastly reducing the false positives and false negatives that have been the norm in the industry.
Abnormal Doesn’t Always Mean Bad
If tuning signatures and rules has been the bane of AppSec teams' existence, there was hope that applying machine learning to application monitoring might save the day. And to be clear, there is considerable value in this approach. Using ML and AI models, we can learn the normal behavior and usage patterns of applications and flag anything that deviates from established norms. This is definitely a good thing. The fact that it doesn't require a human to do the learning at scale is even better.
The problem is that an anomaly doesn't always mean that something is bad. Anomalies can be just... strange. And telling the difference between malicious and merely anomalous often requires a human analyst to investigate. The same history has recently played out in other branches of cybersecurity, where machine-learning detection models, UEBA, and advanced analytics were all supposed to save the day. The problem was not that these technologies didn't work, but that it took a human to investigate the results and decide what to do. Used alone, these techniques quickly land us in the familiar false-positive vs. false-negative conundrum. Since there isn't enough manpower to investigate every alert, security teams often set their thresholds high to catch only the most egregious offenders, while everything else stays under the radar.
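To make the conundrum concrete, here is a minimal sketch of the threshold trade-off. The anomaly scores, labels, and the `classify` helper are all invented for illustration; they do not represent any particular WAF's detection model.

```python
# Hypothetical illustration of the anomaly-threshold trade-off:
# raising the threshold cuts false positives but lets real threats slip by.

def classify(scores, threshold):
    """Flag any request whose anomaly score exceeds the threshold."""
    return [score > threshold for score in scores]

# Toy traffic: (anomaly_score, is_actually_malicious)
traffic = [
    (0.2, False), (0.4, False), (0.55, True),
    (0.6, False), (0.8, True), (0.95, True),
]

for threshold in (0.5, 0.9):
    flagged = classify([score for score, _ in traffic], threshold)
    false_pos = sum(f and not bad for f, (_, bad) in zip(flagged, traffic))
    false_neg = sum(not f and bad for f, (_, bad) in zip(flagged, traffic))
    print(f"threshold={threshold}: FP={false_pos}, FN={false_neg}")

# Prints:
# threshold=0.5: FP=1, FN=0
# threshold=0.9: FP=0, FN=2
```

A low threshold catches every attack in this toy set but flags a benign visitor; the high threshold eliminates the false alarm at the cost of missing two real attacks — exactly the "under the radar" effect described above.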
The Rise of Attacker-Centric Security
Threat X takes a new approach to the WAF to deliver reliable, high-confidence detections. To do so, we combine three important contexts:
Inward Application-Facing Analysis
Outward Attacker-Facing Analysis
Active Attacker Engagement
This approach combines intelligence and action in a completely automated way. The first major difference is that we apply machine learning and AI to attackers as well as applications. In addition to learning the signs of an attack within the application, we also learn the unique behaviors and characteristics of attackers, without the need for signatures.

Much like a medical diagnosis, analysis of an application can reveal the symptoms of an attack and show which components are affected. Instead of a fever, we might see an abnormal number of login attempts or an overburdened application resource. But a physician would look beyond the symptoms and attempt to identify the specific illness causing them. For an application, attacker-centric analysis gives us that answer: we can pinpoint the malicious behavior, see what sort of threat is causing the problem, and determine who we need to block. We can track the progression of an attacker or campaign across the kill chain and identify their tools, techniques, and procedures.

And while intelligence is great, there is a limit to how much you can learn passively. To be certain, you need to engage with the threat. Threat X takes this critical next step, actively interrogating suspicious visitors, then fingerprinting and tracking them over time. The solution applies a wide array of injection, manipulation, and active deception techniques to further analyze, verify, and ultimately block threats.
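As an illustration of the philosophy only: blending the three contexts might look like combining an application-facing anomaly signal, an attacker-facing behavior signal, and the outcome of active engagement into one risk score, and blocking only at high confidence. The function names, weights, and thresholds below are invented for this sketch; they are not ThreatX's actual scoring model.

```python
# Hypothetical sketch: blend three detection contexts into one decision.
# All weights and cutoffs are illustrative assumptions, not a real product model.

def risk_score(app_anomaly, attacker_behavior, engagement_confirmed):
    """Blend application-facing and attacker-facing signals (each in [0, 1]);
    a threat confirmed by active engagement is always scored above the block line."""
    score = 0.4 * app_anomaly + 0.6 * attacker_behavior
    if engagement_confirmed:
        score = max(score, 0.9)  # verified threats get blocked regardless of blend
    return score

def action(score, block_at=0.8):
    """Block only high-confidence threats; merely watch everything else."""
    return "block" if score >= block_at else "watch"

# A strange-but-benign visitor: anomalous application behavior, no attacker signal.
print(action(risk_score(0.7, 0.2, False)))  # prints "watch"

# A visitor verified as malicious through active interrogation.
print(action(risk_score(0.6, 0.7, True)))   # prints "block"
```

The point of the sketch is the design choice: an application anomaly alone (first call) is not enough to block, which is how the approach avoids punishing the merely strange, while attacker-facing evidence and active engagement push a genuine threat over the blocking threshold automatically.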
This is, of course, a very high-level introduction to the overall approach. However, it hopefully shows the philosophy of combining application analysis, attacker-facing analysis, and active attacker engagement as a way to deliver high-confidence, actionable WAF decisions. If you would like to learn more about how Threat X works, request a personal demo with an expert or sign up for one of our monthly live demo sessions.
*** This is a Security Bloggers Network syndicated blog from ThreatX Blog authored by Jeremiah Cruit | CISO. Read the original post at: https://blog.threatxlabs.com/the-rise-of-the-attacker-centric-web-application-firewall-waf