HUMAN Security Applies AI to Combatting Malicious Bots
HUMAN Security this week revealed it is applying artificial intelligence (AI) and data modeling to bot management as part of an effort to provide cybersecurity teams with more granular insights into the origins of cyberattacks.
Bryan Becker, senior director of product management at HUMAN, said HUMAN Sightline dashboards will make it possible to detect, isolate and track individual bot profiles in a way that can be easily visualized and shared. Armed with those insights, it becomes simpler for cybersecurity teams to share threat intelligence about specific bots.
As a provider of a bot management platform, HUMAN already applies this AI capability automatically to thwart malicious bot activity. HUMAN Sightline now gives cybersecurity teams deeper insight into the specific work being done on their behalf, said Becker.
The overall goal is to enable security analysts to visualize the activity of individual bot profiles over time to better understand how their sophistication and capabilities evolve, he added.
That capability makes it possible, for example, to determine at a glance whether cybercriminals are targeting specific products or visiting a select set of pages.
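To make that idea concrete, here is a minimal sketch of how a team might aggregate blocked-request logs by bot profile to see which pages each profile is hitting. The log format and field names (profile_id, path, timestamp) are illustrative assumptions for this example only; they are not HUMAN Sightline's actual data model or API.

```python
from collections import Counter, defaultdict

# Hypothetical log records: each blocked request is tied to a detected bot profile.
blocked_requests = [
    {"profile_id": "bot-1138", "path": "/products/limited-sneaker", "timestamp": "2024-05-01T12:00:03Z"},
    {"profile_id": "bot-1138", "path": "/products/limited-sneaker", "timestamp": "2024-05-01T12:00:07Z"},
    {"profile_id": "bot-0042", "path": "/login", "timestamp": "2024-05-01T12:01:00Z"},
]

def pages_by_profile(records):
    """Count which pages each bot profile visits, so targeting stands out at a glance."""
    counts = defaultdict(Counter)
    for record in records:
        counts[record["profile_id"]][record["path"]] += 1
    return counts

for profile, pages in pages_by_profile(blocked_requests).items():
    top_path, hits = pages.most_common(1)[0]
    print(f"{profile}: most-visited path {top_path} ({hits} hits)")
```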
Additionally, cybersecurity teams will find it easier to justify investments in bot mitigation efforts, said Becker.
Previously, bot management platforms would simply block traffic deemed suspicious, leaving it up to each cybersecurity analyst to determine whether that traffic was malicious or legitimate. The larger goal is to raise the cost of launching cyberattacks to the point where they become much less profitable to engage in, noted Becker.
There will, of course, always be cyberattacks aimed at specific high-value targets, but highly automated attacks being launched more broadly using bots can be more easily thwarted, he added.
It’s not clear exactly what percentage of cyberattacks can be traced back to malicious bot activity, but one thing is certain: Cybercriminals are increasingly relying on automation. When a bot attack is blocked, cybercriminals will either move on to the next target or, if properly motivated, continue to probe for other weaknesses. HUMAN Sightline makes it simpler for a cybersecurity analyst to track how the tactics and techniques used by a specific bot are evolving, said Becker.
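As a rough illustration of what tracking that evolution might look like in practice, the sketch below compares two observation windows for the same hypothetical bot profile. The attributes compared (user agents, request rate, CAPTCHA-solving behavior) are assumptions made for the example and do not reflect how HUMAN Sightline actually models bot behavior.

```python
# Hypothetical snapshots of a single bot profile, captured a week apart.
week_1 = {
    "user_agents": {"python-requests/2.31"},
    "requests_per_minute": 120,
    "solves_captcha": False,
}
week_2 = {
    "user_agents": {"Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
    "requests_per_minute": 15,
    "solves_captcha": True,
}

def diff_profile(before, after):
    """Report which attributes of a bot profile changed between two observation windows."""
    changes = {}
    for key in before:
        if before[key] != after[key]:
            changes[key] = (before[key], after[key])
    return changes

# A shift from a scripted client to a browser-like fingerprint, slower request pacing
# and successful challenge solving would all suggest the operator is adapting.
for attribute, (old, new) in diff_profile(week_1, week_2).items():
    print(f"{attribute}: {old} -> {new}")
```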
Arguably, some of the biggest beneficiaries of advances in AI will be the cybersecurity analysts charged with identifying patterns in a sea of security data that has become too vast for humans to process manually. AI technologies should not only reduce the overall level of toil experienced by cybersecurity analysts but also make it easier to identify issues that might otherwise go undetected for months or even years.
Of course, cybercriminals are also leveraging AI to both launch attacks and, increasingly, probe for weaknesses. Like it or not, cybersecurity teams are now caught up in an AI arms race that is only likely to continue to escalate.