The Promise of Machine Learning vs. The Reality of Human Assisted Learning

Machine Learning (ML) has been around in one form or another for a long time. Arthur Samuel started working in the field in 1949 and coined the term in 1959 while working at IBM. Over the years, ML applications have been developed in practically every industry sector.

Recently, we’ve been hearing a lot about “silver bullet” ML-based cybersecurity solutions that can single-handedly and automatically enable short-staffed security teams to identify and mitigate every kind of security threat imaginable. Of course, silver bullet solutions are as old as security itself, and by definition, they’re almost always too good to be true. So is the current crop of ML-driven cybersecurity solutions real or hype?

Given that most hype contains a few grains of truth, let’s use this post to look at the promise, the marketing hype, and the reality: what ML can and cannot do in its current state (with a peek at what it might be able to do sometime down the road). (Spoiler Alert: The operative word in this blog’s title is “promise.”)

The Good and The Not So Good

The good news is that ML has numerous positive capabilities — things it can do exceptionally well. Specifically, it can gather huge amounts of data from various locations and systems across an organization much faster than humans can. With the right algorithms, it can surface “meaningful” data. To borrow some phrases from signal processing: machine learning is all about taking out the noise present in big data and isolating the signal hidden inside. ML can then hand that signal to humans to parse and act on.

The not-so-good news is that, while ML can gather huge volumes of data at a speed that’s impossible for humans, it can’t make logical or intuitive leaps (i.e., it can only do what it’s been taught). The fidelity of the resulting model correlates strongly with the quality of the training data. New behavior will generate false positives, and intelligent attackers can mask their activity in the noise to generate false negatives. This makes ML in its current state great for building flashy new widgets or narrowly defined pieces of functionality, but poorly suited for general-purpose security. You’re not going to be replacing your SOC analysts with a giant matrix anytime soon.
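To make the training-data point concrete, here is a minimal, hypothetical sketch (all names, numbers, and thresholds are illustrative, not from any real product) of a statistical anomaly detector: it learns a baseline from historical “normal” data, so genuinely new but benign behavior, like a legitimate traffic spike, trips an alert anyway.

```python
import statistics

def fit_baseline(samples):
    """Learn a simple baseline (mean, stdev) from historical 'normal' data."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Train on a week of typical request rates (requests per minute).
normal_rates = [100, 98, 103, 97, 101, 99, 102]
baseline = fit_baseline(normal_rates)

# A legitimate marketing campaign triples traffic. The behavior is new,
# and the model only knows its training data, so it alerts anyway.
print(is_anomalous(300, baseline))  # → True (a false positive)
print(is_anomalous(101, baseline))  # → False
```

The same limitation cuts the other way: an attacker who keeps their activity within the learned baseline never crosses the threshold, producing a false negative.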

So while ML is better than a SOC analyst at processing large amounts of data and separating signal from noise, the advantage in signal interpretation (i.e., translating processed signals into actionable intelligence) still goes to the humans. This means that ML can be a valuable tool for reducing the time needed to gather data: it can be taught to gather relevant data and cut the time analysts need to turn alerts into actionable intelligence, but it cannot replace the humans who conduct the analysis.
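This division of labor can be sketched as a triage queue. In the hypothetical example below (the scoring function, feature names, and weights are invented for illustration), the model ranks alerts by risk so analysts work the most suspicious ones first, while the final verdict remains human.

```python
def score_alert(alert):
    """Toy risk score: sum the weights of whichever signal features fired."""
    weights = {"privilege_escalation": 0.5, "new_binary": 0.3, "off_hours": 0.2}
    return sum(w for feature, w in weights.items() if alert.get(feature))

alerts = [
    {"id": "a1", "off_hours": True},
    {"id": "a2", "privilege_escalation": True, "new_binary": True},
    {"id": "a3", "new_binary": True, "off_hours": True},
]

# ML side: rank alerts by score. Human side: analysts review the queue
# top-down and decide what is a real threat.
queue = sorted(alerts, key=score_alert, reverse=True)
print([a["id"] for a in queue])  # → ['a2', 'a3', 'a1']
```

The point of the sketch is the hand-off: the machine orders the work, and the expensive human attention is spent on interpretation rather than collection.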

ML and AI still have a long way to go. According to Gartner (“Top Security and Risk Management Trends,” April 2018) and others, machine learning cannot yet replace humans, and human input is still essential to parse data and take appropriate action.

Human-Assisted Learning

ML functionality can feed precise signals into next-generation IDS, but it cannot drive omniscient, all-seeing, all-wise cybersecurity solutions by itself. We all want a solution that’s “set it and forget it,” but while an ML system can be taught to grow and adapt over time, it does not really exhibit intelligence.

It’s only when human analysts get hold of the data that meaningful conclusions can be drawn and appropriate actions be defined. At present, the best cybersecurity combination consists of humans and ML tools, drawing on their separate but complementary strengths. As Gartner puts it (“Fighting Phishing — 2020 Foresight,” July 19, 2018): “We can’t escape the fact that humans and machines complement each other and together they can outperform each alone. ML reaches out to humans for assistance to address intent uncertainty. ML aids humans by supporting administrator awareness and providing assistance to higher-tier SOC analysis.”

When you put effectively designed ML together with humans, you can create a powerful workflow that combines rapid data collection and separation of signal from noise with critical thinking and situational awareness — things that only humans possess. ML can gather the information that humans then use in order to define an organization’s risk profile, establish full stack cloud security observability, and determine the actions that are required to mitigate threats or remediate attacks.

A Few Last Words

Given the speed at which threats are changing and the ever-increasing volume of attacks, people want solutions that can keep pace with the volume of data that needs to be gathered, the speed at which it needs to be processed, and the changing nature of the data. As is so often the case, humans want a panacea — a silver bullet. Despite the marketing hype and promises, in the world of cybersecurity, ML is only a facet of the solution and not the solution itself.

According to Gartner (“Gartner Top 6 Security and Risk Management Trends For 2018,” June 4, 2018), “By 2025, ML for aspects of security will be a normal part of security practices and will start to offset some skills and staffing shortfalls.” ML is here to stay, and has grown more powerful as we’ve learned to harness it more effectively. But again, Gartner sees it only being applicable to aspects of security: It can help with many tasks, but it does not solve everything. Let ML help get rid of the noise, so humans can then do what they do best.


*** This is a Security Bloggers Network syndicated blog from Blog – Threat Stack authored by Natalie Walsh.