In a world where malicious bots roam the internet like hungry lions seeking vulnerable applications to devour, application owners are forced to make tough decisions between streamlined, user-friendly workflows and the need to interrupt the user experience with bot protection techniques like CAPTCHA. We’re all tired of the extra work of proving ourselves when filling out and submitting web forms. Who’s got time for that? However, malicious bots can and do cause real harm for countless organizations, especially as credential stuffing and account takeover attacks grow ever more sophisticated.
This reality makes it all the more critical for security teams to distinguish between valid users and the hordes of bots continually trying to worm their way around and into an application. Unfortunately, this goal tends to get messy on the front lines. CAPTCHA offers one way to distinguish between bots and humans, but it comes with plenty of problems. It introduces friction for valid users by putting all the responsibility on them to prove they are worthy of using the application or site. And as bots have gotten smarter, CAPTCHAs have become harder for both bots and humans to solve, which drives up Customer Effort Scores (CES), lowers Customer Satisfaction (CSAT), and can ultimately cause users to give up and abandon the app altogether.
ThreatX gives organizations a far more effective and user-friendly approach to controlling bots. It extends protection and monitoring across the entire history of an entity’s visits, instead of focusing only on form pages. Likewise, it combines the best detection techniques, including behavioral analysis, fingerprinting, transparent forms of interrogation, and a wide range of other factors that contribute to an overall understanding of a visitor’s risk. It can also work with CAPTCHA, turning it from a blunt tool applied to all users into a surgical instrument applied only to the fraction of edge cases that can benefit from it. Let’s take a closer look.
The CAPTCHA Arms Race
CAPTCHA has been around for so long that it is easy to forget that it is an acronym. And in an age full of technical acronyms, CAPTCHA is about as grisly as they come: Completely Automated Public Turing test to tell Computers and Humans Apart. The acronym is a good reminder that a CAPTCHA is a Turing test (technically a reverse Turing test) designed to distinguish humans from artificial intelligence.
However, CAPTCHA, reCAPTCHA, and their derivatives have proven less of a barrier to AI and more of a yardstick for measuring its progress. As soon as CAPTCHAs were introduced, developers started creating programs to solve them automatically. By the mid-2010s, AI was able to solve CAPTCHAs with between 90% and 99% accuracy. And as the machines got smarter, the tests for humans had to get harder. Instead of typing a few characters, users needed to go through multiple steps: click a button, solve a puzzle, pick all the images that contain a school bus, and so on.
The core problem is that as AI gets smarter, humans have to do more to prove themselves. And while bots never get tired, customers do. ThreatX introduces new technology that can separate bots from humans in a completely transparent way. And while CAPTCHA can still play an important role, using ThreatX can vastly reduce how often you need to challenge your visitors.
Viewing the Long Tail of Risk
ThreatX brings a variety of diagnostic and detection capabilities to the fight against bots and malicious automation. Application behavioral analysis, attacker-centric analysis and profiling, active interrogation, IOCs, traditional signatures, and more contribute to a unified and real-time understanding of risk.
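ThreatX does not publish its scoring internals, but the general idea of fusing several detection signals into a single, real-time risk score can be sketched as follows. Every signal name and weight below is illustrative, not taken from the product:

```python
# Illustrative only: signal names and weights are hypothetical, not
# ThreatX's actual model. Sketch of fusing multiple detection signals
# into one unified risk score between 0.0 and 1.0.

SIGNAL_WEIGHTS = {
    "behavioral_anomaly": 0.35,  # e.g., inhuman navigation or typing cadence
    "ioc_match": 0.25,           # indicators of compromise (known-bad IPs, tooling)
    "signature_hit": 0.20,       # traditional attack-pattern signatures
    "interrogation_fail": 0.20,  # failed a transparent client-side challenge
}

def risk_score(signals: dict[str, float]) -> float:
    """Weighted combination of per-signal scores, each clamped to 0.0-1.0."""
    return sum(
        SIGNAL_WEIGHTS[name] * min(max(value, 0.0), 1.0)
        for name, value in signals.items()
        if name in SIGNAL_WEIGHTS
    )
```

A visitor tripping a strong behavioral anomaly plus a known IOC would score well above one who merely matched a stale signature, which is the point of unifying the signals rather than acting on any one alone.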
However, just as importantly, this collection of techniques is continuously running over time. ThreatX algorithms fingerprint and track an entity’s behavior and interactions across multiple pages, sessions, and visits. And this is a big deal when it comes to CAPTCHA and bots. CAPTCHAs are point-in-time controls most often deployed for login and form pages. As such, the sun typically rises and sets on the form page, and there is no other context from which to work. The bot shows up on the form page, and it’s a duel between two pieces of code to see which one is smarter on any given day.
ThreatX can fundamentally change this dynamic. Since each entity is fingerprinted and continuously analyzed for up to 90 days, ThreatX has far more context and will typically determine if a visitor is a bot or human before they ever reach the form. CAPTCHA can still be applied when necessary, but as the exception rather than the rule, thus quickly reducing CAPTCHA’s general use by 90% or more. This reduction eliminates barriers for users and ultimately translates into higher CSAT.
Naturally, these detection techniques are always evolving in response to the bot and automation landscape, and they provide a critical piece of the overall picture of risk. When the more passive detection methods are inconclusive on their own, active interrogation ensures that ThreatX can proactively find an answer. And by aggressively testing the bots, ThreatX again reduces the need to test the humans.
Making CAPTCHA a Surgical Tool
The war between bots and applications shows no signs of slowing down, and both sides will naturally continue to evolve. Amid this fight, there are things organizations can do to put material pressure on bots rather than users. Each application is unique, and CAPTCHA can still play a valid role based on an application’s specific needs and threats. A unified approach to AppSec can ensure that CAPTCHA is a last-resort tool applied surgically, letting organizations hit that magical combination: protecting their users and apps while improving satisfaction.
If you’d like to see a demonstration or learn more about the ThreatX solution, please contact the team at firstname.lastname@example.org.
*** This is a Security Bloggers Network syndicated blog from ThreatX Blog authored by Bret Settle. Read the original post at: https://blog.threatxlabs.com/protecting-users-from-friendly-fire-in-the-war-on-bots