Hybrid Systems: AI and Humans Need Each Other for Effective Cybersecurity

The sudden mainstreaming of chatbots and generative AI tools like ChatGPT has a lot of people worried that this is the AI technology that will finally replace them. Fortunately, that’s not the case.

The more likely scenario is that humans will partner with AI in a hybrid model of job roles, and that is especially evident in cybersecurity. AI is most commonly used for tasks humans can’t do themselves, such as monitoring around the clock and finding anomalies in behaviors and patterns across massive data sets. Once a possible threat is detected, it is up to the human to determine whether what the AI has found is an actual threat or a false positive, and then to decide on the right course of mitigation.
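
To make that division of labor concrete, here is a minimal sketch of the pattern described above, using scikit-learn’s IsolationForest as a stand-in for the AI component. The feature names, thresholds and data are illustrative assumptions rather than a real detection pipeline: the model surfaces anomalies around the clock, and a human analyst decides whether each one is a genuine threat or a false positive.

```python
# Minimal sketch of the hybrid workflow: the model surfaces anomalies,
# a human analyst makes the final call. All features and values below
# are hypothetical, chosen only to illustrate the hand-off.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
# Hypothetical per-session features: [login_hour, megabytes_out, failed_logins]
baseline = rng.normal(loc=[14, 5, 0], scale=[3, 2, 0.5], size=(5000, 3))

new_sessions = np.array([
    [13.0, 6.0, 0.0],    # ordinary working-hours activity
    [3.0, 900.0, 12.0],  # 3 a.m. login, large outbound transfer, many failures
])

# The "24/7 monitoring" half: learn normal behavior, score new events.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
flags = model.predict(new_sessions)  # -1 = anomaly, 1 = looks normal

# The human half: anomalies are queued for analyst review, never auto-blocked.
for session, flag in zip(new_sessions, flags):
    if flag == -1:
        print(f"Queue for analyst triage (threat or false positive?): {session}")
    else:
        print(f"No action needed: {session}")
```

The key design point is in the last few lines: the model never blocks anything on its own; it only queues candidates for the human to judge.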

“AI is just linear algebra at scale; human beings decide the training sets and the features used to do that algebra,” said John Bambenek, principal threat hunter at Netenrich, in an email interview. Once it learns, AI can do some things faster than humans, like finding patterns in chaotic structures. But sometimes intuition is what’s needed, and right now, that’s a big advantage humans have over machines.

Chatbots, Humans and the Cybersecurity Program

AI has been an integral part of cybersecurity platforms for a while now, but the rise of the chatbot has changed the conversation. We know that chatbots can be used to create new threats and cyberattacks, but can they be used within the cybersecurity system to defend against attacks?

“ChatGPT and similar technologies such as Microsoft’s usage of OpenAI models for GitHub’s Copilot service are going to help break down barriers within cybersecurity,” said Justin Shattuck, CISO with Resilience Insurance.

Chatbots can provide more detailed, actionable information that helps security analysts act more effectively and efficiently. Security teams are always fighting the noise, and Shattuck said he believes chatbot technology could be the tool that smooths out the chatter and separates legitimate signals from the rest.

“Being able to use an existing tool with an integration to a service that can help decipher, translate, decode and make sense of complex information is going to help reduce both the barrier to entry as well as the time required to act on incidents,” said Shattuck. “I believe this is going to create a shift in incident reporting and subsequently the effectiveness of security teams.”
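
As a rough illustration of the kind of integration Shattuck describes, the sketch below passes a raw IDS alert to a large language model and asks for a plain-language triage summary. It assumes the OpenAI Python SDK and an API key in the environment; the alert payload, prompt and model name are hypothetical placeholders, not details from any particular product.

```python
# Sketch of an LLM-assisted alert triage step. The alert text and prompt are
# illustrative assumptions; the human analyst still makes the final decision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alert = (
    "ET POLICY PE EXE or DLL Windows file download HTTP | "
    "src=10.2.14.77:49213 dst=185.220.101.4:80 | "
    "uri=/update/svchost.exe | proto=TCP | sid=2018959 rev=4"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whatever your team has approved
    messages=[
        {
            "role": "system",
            "content": (
                "You are a SOC assistant. Summarize alerts in plain language, "
                "note whether the signal looks legitimate or likely noise, and "
                "suggest next steps. The human analyst makes the final decision."
            ),
        },
        {"role": "user", "content": f"Triage this IDS alert:\n{raw_alert}"},
    ],
)

print(response.choices[0].message.content)
```

Even here, the output is advisory: the summary lands in front of an analyst, who still decides whether and how to respond.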

For Now, AI Needs Humans

The increased use of AI within cybersecurity programs is frequently brought up as a way to address the skills gap and the shortage of skilled cybersecurity professionals. And there’s no denying that AI has done a lot to handle the repetitive tasks that often lead to burnout.

But AI needs humans. Security professionals serve a vital role in supervising the behaviors and tasks that machines are asked to perform, said Shattuck. “I don’t see this oversight disappearing any time soon. As humans, we should seek to have machines, automation, robots, whichever flavor is required to perform the mundane tasks that get in the way of us doing what we do best: Think and solve complex problems that computers are not presently equipped to handle.”

And we already know that AI can’t do everything. “If AI were able to take over all cybersecurity functions, we’d have solved the spam problem by now,” said Bambenek. “Simply put, attackers have almost three decades of experience in fooling automated systems. This capability will not just go away.”

Ultimately, cybersecurity is a human problem that has been accelerated by technology, Casey Ellis, founder and CTO at Bugcrowd, pointed out.

“The entire reason our industry exists is because of human creativity, human failures and human needs,” said Ellis. “Whenever automation ‘solves’ a swath of the cybersecurity defense problem, the attackers simply innovate past these defenses with newer techniques to serve their goals.”

At the end of the day, it comes down to one thing, according to Shattuck. “Technology solves challenges that are mundane, allowing humans to focus on what they’re good at, strategically identifying what to tackle next, focus the machines and repeat.”

For that reason, it is unlikely that AI will completely take over cybersecurity functions. We’ll need human operators to bring intuition, creativity and ethical decision-making to cybersecurity programs well into the future.


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.
