Cybersecurity and AI: Not Partners Just Yet
There may be a lot of interest in applying artificial intelligence (AI) to make up for a chronic shortage of cybersecurity skills, but a highly fragmented cybersecurity landscape is making it extremely difficult to aggregate the data required to train those AI models.
At the recent GPU Technology Conference hosted by NVIDIA in Washington, D.C., multiple speakers made it clear that applications of AI to cybersecurity are unlikely to move past automating basic, rote functions anytime soon.
Iain Cunningham, vice president for intellectual property and cybersecurity at NVIDIA, told conference attendees that AI clearly has great potential to identify outliers indicative of a cybersecurity compromise. However, aggregating all the security data required to drive AI models that must be built and then continuously updated is proving difficult, he said.
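To make the outlier-detection idea concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn, of flagging anomalous login telemetry with an isolation forest. The features, data values and contamination setting are invented for illustration and are not anything NVIDIA has described.

```python
# Hypothetical sketch: flagging outliers in login telemetry with an
# isolation forest. Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" telemetry: [login_hour, bytes_transferred_mb, failed_attempts]
normal = np.column_stack([
    rng.normal(13, 2, 500),    # logins cluster around midday
    rng.normal(40, 10, 500),   # typical transfer volumes
    rng.poisson(0.2, 500),     # occasional failed attempts
])

# A few anomalous sessions: off-hours logins, huge transfers, many failures
anomalies = np.array([
    [3.0, 900.0, 7],
    [2.5, 1200.0, 12],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers, 1 for inliers
print(model.predict(anomalies))   # expected: [-1 -1]
print(model.predict(normal[:3]))  # expected: mostly [1 1 1]
```

The hard part, as Cunningham notes, is not the model but keeping a representative stream of data like `normal` flowing in so the baseline stays current.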
As a result, the cybersecurity industry isn't making as much AI progress as initially anticipated. In fact, Joshua Patterson, director of AI infrastructure for NVIDIA, said the cybersecurity sector is falling behind other sectors where AI is already being applied at scale. The industry needs to find a way to band together to address the data aggregation issue in a meaningful way, he noted.
Specifically, Patterson said much work needs to be done to build not only high-quality data lakes but also the pipelines that make that data accessible quickly enough to respond to rapidly changing events, such as so-called "deepfakes" or other fictitious digital personas.
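As a rough illustration of the pipeline problem Patterson raises, the sketch below aggregates raw security events over a sliding time window so downstream models see fresh features. The event schema, window size and feature choices are all assumptions made for the example.

```python
# Hypothetical sketch of a streaming feature pipeline: roll up raw
# security events over a sliding window so models can react quickly
# to changing activity. Schema and window size are invented.
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since epoch
    source_ip: str
    failed_login: bool

WINDOW_SECONDS = 300.0  # assumed 5-minute sliding window

class SlidingWindowAggregator:
    def __init__(self, window: float = WINDOW_SECONDS):
        self.window = window
        self.events: deque[Event] = deque()

    def add(self, event: Event) -> dict:
        self.events.append(event)
        # Evict events that have fallen out of the window
        while self.events and event.timestamp - self.events[0].timestamp > self.window:
            self.events.popleft()
        # Emit per-window features for downstream models
        failures = sum(e.failed_login for e in self.events)
        unique_ips = len({e.source_ip for e in self.events})
        return {"events": len(self.events),
                "failed_logins": failures,
                "unique_source_ips": unique_ips}

agg = SlidingWindowAggregator()
print(agg.add(Event(1000.0, "10.0.0.5", False)))
print(agg.add(Event(1010.0, "10.0.0.9", True)))
```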
Coleman Mehta, senior director for U.S. policy at Palo Alto Networks, said it's clear more software is necessary for what is going to be a software fight. The challenge is that while most existing security tools rely on deterministic rules that are easy to understand, the next generation of cybersecurity AI tools will be more probabilistic: they will be based on behavioral algorithms in which organizations will need to have confidence, Mehta noted.
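The distinction Mehta draws can be sketched with a toy contrast, under assumed inputs: a deterministic rule fires only on an exact signature match, while a probabilistic, behavior-based check scores how far activity deviates from a learned baseline and flags it only past a confidence threshold. Every value below is invented for illustration.

```python
# Toy contrast between a deterministic rule and a probabilistic,
# behavior-based check. All data here is invented for illustration.
import statistics

KNOWN_BAD_HASHES = {"e3b0c44298fc1c14"}  # hypothetical signature list

def deterministic_check(file_hash: str) -> bool:
    # Fires if and only if the hash matches a known signature
    return file_hash in KNOWN_BAD_HASHES

def probabilistic_check(todays_logins: int, history: list[int],
                        z_threshold: float = 3.0) -> bool:
    # Flags behavior that deviates strongly from the learned baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    z_score = (todays_logins - mean) / stdev
    return z_score > z_threshold

history = [4, 5, 6, 5, 4, 5, 6, 5]               # typical daily logins
print(deterministic_check("e3b0c44298fc1c14"))   # True: exact match
print(probabilistic_check(40, history))          # True: far from baseline
print(probabilistic_check(6, history))           # False: within baseline
```

In practice, the confidence Mehta describes comes from tuning that threshold against an organization's tolerance for false positives.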
Most organizations, however, are a very long way from achieving that goal. For example, Robert Hale, a Lockheed Martin Information Assurance and Information Operations Fellow, noted that Lockheed Martin is just beginning to advance to the next stage of cybersecurity AI after running experiments on commercial processors. This next stage will employ NVIDIA GPUs, first to identify what is normal within a highly distributed IT environment and to reduce the time it takes to identify a compromise. Once that's accomplished, Lockheed will move on to evaluating resiliency, then to building AI models that are at least as fast as humans, and eventually to building a cyber-digital twin of the IT environment that needs to be secured, he said.
Obviously, given the effort required, it may be quite a while before most organizations are routinely exploiting AI to combat cybersecurity threats. However, the scope of the threat, coupled with a chronic shortage of cybersecurity expertise, means there is no real alternative. Investments in cybersecurity AI research and development will continue, as much an article of faith and hope as a practical element of any cybersecurity strategy.