No matter the industry, adding AI technologies to work processes has made many employees nervous. There is a fear that automation is taking over and that robots will eventually push out the human workforce. To quell these fears, it is the responsibility of business decision makers to nurture the relationship between humans and AI/ML technologies and to show how the technology can make workers more productive and reduce burnout.
This is especially true in cybersecurity, where AI/ML technologies are built into security systems to detect, for example, anomalies in user behavior patterns and logs—the kind of thing that is necessary for good security posture, but nearly impossible for humans to manage alone.
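To make that concrete, here is a minimal sketch of the kind of statistical anomaly detection a security tool might automate over log data. All names, numbers, and thresholds are illustrative assumptions, not any real product's API; production systems use far richer models, but the principle is the same: the machine scans volumes of data no human could review by hand.

```python
# Illustrative sketch: flag days whose login count deviates sharply from
# the historical norm, using a simple z-score test. Hypothetical example.
from statistics import mean, stdev

def find_anomalies(daily_logins, threshold=3.0):
    """Return indices of days more than `threshold` standard
    deviations from the mean login count."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, count in enumerate(daily_logins)
            if abs(count - mu) / sigma > threshold]

# 30 days of roughly steady activity, with one large spike on day 15.
history = [50, 52, 48, 51, 49, 50, 53, 47, 50, 52,
           49, 51, 50, 48, 52, 400, 50, 49, 51, 50,
           48, 52, 50, 51, 49, 50, 52, 48, 51, 50]
print(find_anomalies(history))  # the spike at index 15 is flagged
```

The point of such automation is scale: a human analyst cannot eyeball every account's daily activity, but a simple statistical pass can surface the handful of days worth a closer look.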
It’s important to foster an atmosphere where AI is seen as a tool, just like any other security tool.
“Just as automation and cybersecurity products in general do not replace people, AI is not going to replace anyone,” said Tyler Shields, CMO at JupiterOne, in an email comment. AI techniques improve efficiency by taking some of the more mundane, rote and basic tasks away from analysts and the security team, freeing humans to perform higher-order analysis in new ways.
“AI, ideally, enables our most intelligent resources—humans—to deliver at the highest level possible,” Shields added.
Fitting AI in with the Security Team’s Strategy
Adopting AI/ML into a cybersecurity program should be approached like any other new technology: first, determine how and where it will best benefit the company. These tools should integrate seamlessly with existing systems and fit within the security team’s overall strategy.
Another point to consider is the skills of the current security team. You can’t simply drop AI into your security toolkit and expect it to meet your objectives flawlessly; you’ll need people who can train and tune it. Rather than replacing humans, AI requires human partners with specialized skills. As you build AI into your systems, you’ll also need to build that expertise on your team.
Set Achievable Goals
Most organizations aren’t prepared, at least on the human side, to build the AI systems they want to have, so it is better to start with the systems you need to have. Whenever attempting an AI task, it’s important to start with the end result in mind, suggested Tim Wade, Technical Director, CTO Team at Vectra, in an email comment. For example, in the case of AIOps for security, that means developing a clear vision for what the AI will handle versus what humans will do. The best place to start is identifying routine, repeatable tasks involving large amounts of data that humans currently handle, and looking for ways to offload those tasks onto machines.
“Hint: The AI is going to empower, not replace, humans,” Wade said.
While AI takes care of the mundane, humans should focus on tasks that involve ambiguity and require traits like context, judgment and ethics. One example is using AI to identify high-fidelity signals that should be investigated, then passing those investigations to humans.
“Once the humans have ‘solved’ some degree of ambiguity in the investigation,” Wade explained, “it’s time to pass analysis back to the machine for another iteration; repeating the process until a tangible outcome is achieved.”
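The iterative loop Wade describes can be sketched in a few lines of code: the machine surfaces high-fidelity signals, a human records a judgment that resolves some of the ambiguity, and the result feeds back into the next automated pass. This is a hypothetical illustration; the `Alert` class, score threshold, and function names are assumptions for the sketch, not any vendor's API.

```python
# Hypothetical sketch of a human-in-the-loop triage cycle.
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    score: float                    # machine-assigned fidelity score
    notes: list = field(default_factory=list)
    resolved: bool = False

def machine_pass(alerts, min_score=0.8):
    """Automated pass: surface only high-fidelity, unresolved signals."""
    return [a for a in alerts if a.score >= min_score and not a.resolved]

def human_pass(alert, verdict, note):
    """Human pass: apply context and judgment, resolving ambiguity."""
    alert.notes.append(note)
    alert.resolved = verdict in ("benign", "contained")

# One iteration: the machine triages, the analyst judges, and the
# outcome feeds back so the next machine pass sees less ambiguity.
queue = [Alert("auth-logs", 0.95), Alert("dns", 0.40)]
for alert in machine_pass(queue):
    human_pass(alert, "benign", "service account, expected behavior")
remaining = machine_pass(queue)     # next iteration's work queue shrinks
```

Each cycle narrows the queue: the machine never renders the final judgment, and the human never wades through the low-fidelity noise.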
AI has become such a buzzword in the cybersecurity industry that many security teams feel they must add some component or fall behind. Some organizations simply aren’t ready to add these tools to their security systems, because they don’t have the staff in place or lack defined goals for the AI to accomplish.
“The expectation shouldn’t be for AI/ML to replace the security operators, or to be ‘smarter’ than them,” said Erkang Zheng, founder and CEO at JupiterOne, in an email comment. “Rather, I believe the job for AI/ML in the foreseeable future is to help security professionals be more efficient at covering the basics and providing automated data points to help them make better decisions easier and faster.”