With massive volumes of information pouring into today’s complex network environments, IT and security teams are finding it more and more difficult to figure out what is an actual threat and what isn’t. Artificial intelligence (AI) can be a great ally in pinpointing genuine risk amid all that data, provided it is applied correctly.
People do things with their computers that are at best ill-advised and at worst outright dangerous. For the most part, they do these things in ignorance. They click on interesting links that lead to malicious sites or download malware onto the system. They store sensitive information in insecure places. Despite all the data breach headlines, an assumption persists that if you are able to do something on your computer, it must be okay.
As a result, the network ends up generating thousands of anomalies, which set off alerts on a daily basis. Security teams have to wade through all these alerts without the ability to tell the difference between what’s malicious and what’s not.
This is a huge time suck that is also unsafe. Your network’s security depends on its personnel’s ability to distinguish between the malicious and the non-malicious anomalies. AI and machine learning (ML) can be used to help teams identify which anomalies they need to be concerned about and which are benign.
AI Is Not a Quick Fix
As noted earlier, though, AI and ML technologies need to be applied correctly: with forethought and a sense of how the technology can best support the IT and security team. You need a smart framework to focus on which anomalies or discrepancies matter most to your organization. Some providers recommend that your team focus on seven to 10 criteria for anomaly analysis and leave it at that.
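As a rough illustration of that criteria-based approach, the sketch below scores each anomaly against a small weighted set of criteria and keeps only those that cross an alert threshold. The criterion names, weights, and threshold are hypothetical, chosen for illustration rather than taken from any vendor's product:

```python
# Hypothetical criteria-based anomaly triage: each anomaly is checked
# against a small, weighted set of criteria and given a risk score.
# Criterion names and weights below are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "off_hours_activity": 2,
    "new_external_destination": 3,
    "unusual_data_volume": 3,
    "privileged_account": 4,
    "rare_process": 2,
}

def score_anomaly(anomaly: dict) -> int:
    """Sum the weights of every criterion the anomaly matches."""
    return sum(
        weight
        for criterion, weight in CRITERIA_WEIGHTS.items()
        if anomaly.get(criterion, False)
    )

def triage(anomalies: list[dict], threshold: int = 5) -> list[dict]:
    """Keep only anomalies whose total score meets the alert threshold."""
    return [a for a in anomalies if score_anomaly(a) >= threshold]
```

A scheme like this is easy to tune, which is exactly its limitation: each anomaly is judged in isolation, with no notion of how anomalies relate to one another.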
This is a starting point, but you need to go further and look at anomalies collectively to detect trends and coordinated behaviors. In fact, implementing true anomaly detection takes an adversary mindset.
This is a whole new way of looking at and thinking about network defense. Many solutions and security professionals are focused on figuring out which criteria are the most important in terms of anomaly detection. An adversary approach requires more holistic thinking: In what sequence and across what hosts do these anomalies fit together in such a way as to resemble what an adversary might actually be doing inside a network?
Adversaries have an ever-expanding repertoire of ways to get inside your network, but once inside, their campaigns must contain three elemental behaviors:
- Reconnaissance: moving around inside your network to learn about its structure and services and to locate valuable data.
- Collection: gathering and moving valuable data in preparation for exfiltration.
- Exfiltration: hiding the movement of data from the network to external destinations.
If you look at anomalies to see if they correlate with these behaviors, the true security picture emerges.
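One way to picture that correlation step: assume each anomaly has already been labeled with one of the three behaviors, then check whether any host's timeline contains them in campaign order. The event format and labels below are assumptions for illustration, not a standard schema:

```python
from collections import defaultdict

# Illustrative sketch: flag hosts whose anomaly timeline contains the
# reconnaissance -> collection -> exfiltration behaviors in order.
# Events are assumed to be (host, timestamp, behavior_label) tuples.
CAMPAIGN_SEQUENCE = ["reconnaissance", "collection", "exfiltration"]

def hosts_matching_campaign(events):
    """Return hosts whose time-ordered anomalies contain the campaign
    behaviors as an ordered subsequence."""
    timelines = defaultdict(list)
    for host, timestamp, behavior in events:
        timelines[host].append((timestamp, behavior))

    flagged = []
    for host, timeline in timelines.items():
        timeline.sort()              # order this host's anomalies by time
        stage = 0                    # index of the next expected behavior
        for _, behavior in timeline:
            if behavior == CAMPAIGN_SEQUENCE[stage]:
                stage += 1
                if stage == len(CAMPAIGN_SEQUENCE):
                    flagged.append(host)   # full campaign observed
                    break
    return flagged
```

A host that only exfiltrates, or only scans, never completes the sequence, so isolated oddities drop out while coordinated behavior stands out.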
AI Done Right
It has quickly become a given that AI and ML will help you discover which security alerts are the most important. However, the hype has often failed to match reality, casting doubt on AI and ML. And some assume that implementing AI and ML eliminates the need for a human in the loop, which is inaccurate. AI amplifies the skill of the humans who use AI tools, but the tools themselves cannot take the place of seasoned human professionals, nor were they intended to.
When it comes to network security, then, AI and ML are not tools that you leave on autopilot. But with an adversary-focused framework, security pros can make sure anomaly analysis surfaces the anomalies that are truly malicious, not just those that rank as more or less important.
Spotting the Real Threats
Computers were made to serve humans, but humans are the weakest link in keeping them secure. All the random activity that seems safe creates a great deal of noise and confusion in the network. This leads to security issues, which vendors tried to fix by offering systems that raise alerts on suspicious activity. But this just created another problem: IT and security teams are now overwhelmed by hundreds or thousands of alerts each day.
However, AI and ML can help your teams look at the network from the threat actor’s perspective to spot the activities an adversary must carry out to steal data and harm your network. In this way, teams know what to pay attention to: what is an actual threat in the sea of alarms. This eliminates confusion, focuses security expertise and keeps the network safer.