Cybersecurity and Machine Learning/AI: What’s the Real Impact?

Will AI render human analysts obsolete, or serve as an extension that makes them more effective? And what about the bad guys? Are we headed for an AI showdown? Here’s the lay of the land in AI territory right now.

The buzz on artificial intelligence (AI) is deafening. Depending on who is hawking what, AI is either vastly superior to mere mortals, or it’s the machine version of a friendly helper not much smarter than its furry, doggy counterpart. So, which is it in cybersecurity? An autonomous protector preparing to give weakling human analysts the boot, or a computerized assistant willing to guard the perimeter alongside its master?

Further, is AI loyal, or will it turn on security pros and work for the bad guys instead? Will there be a good vs evil AI showdown?

Getting Real About AI

First, true AI is cognitive computing, meaning a computerized model that mimics human thought processes. That doesn’t exist yet. We’re not even close. So, no, the machine overlords are not coming to overthrow mankind or to take your job—not yet, and not in the foreseeable future.

But before you look at IBM’s Watson with a smug smirk on your face, remember that AI doesn’t have to be human to be better than us. It doesn’t even have to be fully realized AI. It could just be machine learning or deep learning, or any combination thereof.

DARPA, which has been a prime driver of AI development, best explains where we are. The agency describes AI in three waves: handcrafted knowledge (models built from human expertise), statistical learning (systems that perceive and solve problems in the natural world, but with less reasoning capability than the first wave) and contextual adaptation (systems that build their own models of reality).

If you’d like more detail on the three waves and how each is used in cybersecurity, watch this DARPA video. And I do highly recommend you watch it, even if you’re very familiar with AI, because it’s very specific to cybersecurity uses.

Danger, Will Robinson!

There are impressive use cases and results in that DARPA video. So does this mean that at least some human analysts are working on borrowed time?

“It depends on your opinion of human analysts and what we think they do; frankly, with regard to security alerts, most would agree that humans can’t do very much due to alert overload,” says Terry Ray, CTO at Imperva.

In other words, your job is safe if you are a crackerjack threat detector and thwarter. But for the “dismiss alert” button-pushers, there’s not much hope of skipping the unemployment line.

AI will more accurately identify actual threats, and thus produce far fewer false positives. There won’t be much demand for button-pushing after that.

“One of the biggest challenges analysts fight every day is trying to identify signals in the noise. Artificial intelligence or advanced analysis could help in identifying interesting behavior and events,” says Kunal Anand, co-founder and CTO at Prevoty.
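To make that a bit more concrete, here’s a minimal sketch of the kind of “signal in the noise” analysis Anand is talking about: an unsupervised model trained on routine login behavior that flags the events least like it. The feature names and data below are hypothetical illustrations, not any vendor’s actual detection pipeline.

```python
# Minimal sketch: flagging anomalous login events with an unsupervised model.
# Features and thresholds are hypothetical; real systems use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: [hour_of_day, failed_attempts, mb_downloaded]
normal_logins = np.column_stack([
    rng.normal(13, 3, 1000),   # mostly business hours
    rng.poisson(0.2, 1000),    # rare failed attempts
    rng.normal(50, 15, 1000),  # typical data volume
])

suspicious_logins = np.array([
    [3.0, 6, 900.0],   # 3 a.m., many failures, large download
    [2.5, 8, 1200.0],
])

# Train on routine behavior, then score everything.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

events = np.vstack([normal_logins, suspicious_logins])
labels = model.predict(events)        # -1 = anomaly, 1 = normal
scores = model.score_samples(events)  # lower = more anomalous

for idx in np.where(labels == -1)[0]:
    print(f"event {idx}: score={scores[idx]:.3f} features={events[idx]}")
```

The point isn’t this particular model; it’s that the machine does the triage so the human analyst only looks at the handful of events that score as outliers.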

Even so, AI performs roughly the equivalent of the “Lost in Space” robot’s warning.


To do more than that, you’ll need human analysts to respond to the threat, or automation in place to deal with it. That master-slave relationship is unlikely to change for many years.

“As AI gets better, the quantity and quality of this automation will continue to improve; however, humans will always bring an overlay of human intuition that just can’t possibly be automated,” says Tyler Shields, VP of Strategy at Signal Sciences. “Because of human intuition, we won’t ever see AI overrun humans 100 percent. Humans will always adapt and change.”

Battle of the AI Bots

Does this mean, then, that human analysts and AI will be like caped crusaders and their sidekicks, battling bad guys to the end? Umm, no. AI is software, and, as such, purchasable and programmable by all. Seriously. Did you not note the part in the DARPA video where the AI learned profanities from people on Twitter?

Yeah, AI is not your friend. It’s your slave. And his slave. And her slave—until it learns to blow raspberries at everyone and give them the finger for good measure.

So, lesson No. 1 is be careful what you teach your AI. Lesson No. 2 is that the bad guys are going to teach AI to be very, very bad.

“Hackers are already engaging with artificial intelligence in a malicious capacity. Artificial intelligence fraud bots, for example, have the capacity to create fraudulent documents, imitate handwriting and generate fraudulent conversations across social media apps,” says Ray. “These kinds of comprehensive attacks can be used to build up a fraudulent evidence base against an individual.”

Yes, that means an AI algorithm could plausibly predict not only your password for online banking, but also your passwords on every online account you have, from utility and financial institutions to your medical records, software subscriptions, data storage and email accounts. Majorly scary threat level.

Indeed, there isn’t much AI can’t discern about you from the many patterns in information we all create by our actions every day.

Eventually, the white hats will pit the good AI against the bad AI to try to stop these overwhelmingly sophisticated attacks. That part is inevitable. So what might that look like?

Alok Tongaonkar, head of Data Science at RedLock, a cloud threat defense company, outlined these as probable attacker uses of AI:

  • Use machine learning in a multitude of ways to pick potential victims and subvert defense mechanisms.
  • Analyze the response to their attacks and understand the defense techniques being used.
  • Inject data into the system that can make the machine learning models ineffective for detecting attacks. For instance, if the attacker can guess the kind of machine learning techniques being used, he or she can inject data that causes a lot of false alerts. This can lead to alert fatigue for the security operator, who may disable the machine learning tool or just ignore alerts from the tool altogether. (A sketch of this poisoning effect follows the list.)
  • Replace inefficient bulk phishing with personalized, highly targeted spear-phishing attacks, speedily and accurately crafted with machine learning.
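As a rough illustration of that data-injection scenario, here’s a sketch of how mislabeled samples slipped into a training set can drag a simple classifier’s decision boundary into normal traffic and inflate its false-alert rate, exactly the alert fatigue Tongaonkar warns about. The two-feature “detector” and all of the data here are synthetic stand-ins, not a real product’s model.

```python
# Illustrative sketch of training-data poisoning: injected samples labeled
# "malicious" but shaped like benign traffic drive up false positives.
# Data and features are synthetic; real detectors use far more signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_traffic(n_benign=2000, n_malicious=200):
    # Two hypothetical features: request rate and payload entropy.
    benign = rng.normal([10.0, 3.0], [2.0, 0.5], size=(n_benign, 2))
    malicious = rng.normal([30.0, 6.0], [3.0, 0.5], size=(n_malicious, 2))
    X = np.vstack([benign, malicious])
    y = np.concatenate([np.zeros(n_benign), np.ones(n_malicious)])
    return X, y

def false_positive_rate(model, X, y):
    preds = model.predict(X)
    return preds[y == 0].mean()  # share of benign traffic flagged as malicious

X_train, y_train = make_traffic()
X_test, y_test = make_traffic()

clean = LogisticRegression().fit(X_train, y_train)

# Attacker injects benign-looking samples labeled as "malicious",
# pulling the decision boundary into normal traffic.
n_poison = 400
poison_X = rng.normal([12.0, 3.2], [2.0, 0.5], size=(n_poison, 2))
poison_y = np.ones(n_poison)
poisoned = LogisticRegression().fit(
    np.vstack([X_train, poison_X]),
    np.concatenate([y_train, poison_y]),
)

print(f"clean model FPR:    {false_positive_rate(clean, X_test, y_test):.3f}")
print(f"poisoned model FPR: {false_positive_rate(poisoned, X_test, y_test):.3f}")
```

The poisoned model cries wolf far more often, and an operator drowning in bogus alerts eventually stops listening, which is the real payoff for the attacker.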

Total Impact

According to a recent Cylance Global Research Report, 77 percent of security teams have prevented more breaches with AI-powered tools, and 81 percent say AI detected threats before their human analysts could. Further, 74 percent say they won’t be able to cope with the cybersecurity skills gap if they don’t adopt AI.

When all is said and done, AI is a necessity, not a luxury, in any cybersecurity effort. But it’s not a matter of plug, play and forget. Human strategy is still key to everything from the strength of the algorithms to deciding which machine learning tactics to layer and to what degree. Security is still a matter of human mind pitted against human mind, but the challenge is bigger and growing. Mastery and availability of new tools and weapons are a necessity, and chief among them is AI.

Pam Baker