Orca Security Adds Generative AI Asset Search Tool
Orca Security today added a generative artificial intelligence (AI) tool that cybersecurity teams can use to discover what assets they have running in their cloud computing environments.
Based on an integration with the Microsoft Azure OpenAI GPT-4 service, the search capability Orca Security is providing extends the company's previous support for ChatGPT, using the REST interface Microsoft provides to enable organizations to launch prompts more efficiently.
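To give a sense of what launching a prompt through that REST interface looks like in practice, the sketch below sends a question to an Azure OpenAI GPT-4 deployment. The resource name, deployment name, API version and prompt are placeholders for illustration, not details of Orca Security's actual integration.

```python
# Minimal sketch of sending a prompt to an Azure OpenAI GPT-4 deployment
# over its REST interface. Resource name, deployment name and API version
# below are hypothetical placeholders, not Orca Security's configuration.
import os
import requests

AZURE_OPENAI_ENDPOINT = "https://my-resource.openai.azure.com"  # hypothetical resource
DEPLOYMENT = "gpt-4"                                            # hypothetical deployment name
API_VERSION = "2024-02-01"

url = (
    f"{AZURE_OPENAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    f"/chat/completions?api-version={API_VERSION}"
)

payload = {
    "messages": [
        {"role": "system",
         "content": "Translate the user's question into a cloud asset search query."},
        {"role": "user",
         "content": "Which of my cloud assets are still running a vulnerable Log4j version?"},
    ],
    "temperature": 0,
}

response = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"],
             "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Print the model's reply, e.g. a structured query the search backend could run.
print(response.json()["choices"][0]["message"]["content"])
```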
Orca Security CEO Gil Geron said the goal is to provide a natural language interface through which cybersecurity teams and DevOps professionals can easily collaborate and use the company’s cloud-native application protection platform (CNAPP) to secure cloud computing environments. The overall goal is to democratize the discovery process as much as possible, he added.
In the wake of the disclosure of a zero-day vulnerability such as the one found in Log4j, for example, an IT team could spend months looking for every instance of a vulnerable software component. In addition to enabling faster discovery of vulnerabilities, cybersecurity teams will be able to discover where personally identifiable information (PII) is located within a cloud computing environment, noted Geron.
At the core of the Orca Security platform is a SideScanning technology that examines block storage out-of-band and cross-references what it finds with the application programming interfaces (APIs) exposed by the cloud service providers, eliminating the need to deploy agent software in cloud computing environments. Generative AI now makes it possible to search the data collected by Orca Security to determine precisely where various software assets are running, without users needing to know, for example, the specific name each provider uses for an asset in a stopped state.
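The simplified sketch below (not Orca's actual implementation) shows why that matters: each provider labels a stopped machine differently, so a search layer has to normalize those states before it can answer a plain-language question such as which stopped assets still contain a vulnerable Log4j build. All asset data here is invented.

```python
# Illustrative sketch (not Orca's implementation) of normalizing the
# provider-specific names for a "stopped" machine so one query works
# across clouds.
from dataclasses import dataclass

# Lifecycle states that all mean "the machine is not running".
STOPPED_STATES = {
    "aws": {"stopped", "stopping"},                               # EC2 instance states
    "azure": {"PowerState/stopped", "PowerState/deallocated"},    # VM power states
    "gcp": {"TERMINATED", "STOPPING"},                            # Compute Engine statuses
}

@dataclass
class Asset:
    provider: str
    asset_id: str
    state: str
    packages: tuple  # e.g. package names found by an out-of-band disk scan

def find_stopped_assets_with_package(assets, package_prefix):
    """Return stopped assets on any provider that contain a package whose
    name starts with package_prefix (e.g. 'log4j-core')."""
    return [
        a for a in assets
        if a.state in STOPPED_STATES.get(a.provider, set())
        and any(p.startswith(package_prefix) for p in a.packages)
    ]

# Made-up inventory, as it might look after cross-referencing disk scans
# with the cloud providers' inventory APIs.
inventory = [
    Asset("aws", "i-0abc123", "stopped", ("log4j-core-2.14.1",)),
    Asset("azure", "vm-web-01", "PowerState/running", ("log4j-core-2.17.1",)),
    Asset("gcp", "worker-7", "TERMINATED", ("openssl-1.1.1k",)),
]

for asset in find_stopped_assets_with_package(inventory, "log4j-core"):
    print(f"{asset.provider}:{asset.asset_id} is stopped and still carries a Log4j build")
```

A natural language interface spares users from memorizing tables like the one above; the generative AI layer maps a question about "stopped machines" onto whatever terminology each provider uses.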
It’s not clear whether the rise of generative AI will make it simpler for organizations to adopt DevSecOps best practices, but the amount of time DevOps teams spend looking for instances of vulnerable software and misconfigurations in cloud computing environments is considerable. Any effort to streamline that process should reduce both the number of vulnerabilities that can be exploited and the fatigue teams currently experience when hunting for them.
Going forward, cybersecurity teams should also anticipate taking advantage of summarization capabilities enabled by generative AI to produce reports, said Geron.
The challenge is that while generative AI significantly helps defenders, it is also being used by cybercriminals to launch more sophisticated cyberattacks faster than ever. Cybersecurity teams are already finding they have only a few minutes to limit the blast radius of any given breach, so reducing the amount of time required to discover vulnerable assets is critical. As such, generative AI, coupled with existing machine learning algorithms that can predict attack paths, is going to be a crucial capability, especially as cybersecurity professionals are increasingly expected to respond to cyberattacks in near-real-time.
In the meantime, the overall size of cloud computing environments that need to be defended will continue to grow. It is already nearly impossible to manually track what assets are running where. The only way cybersecurity teams can effectively defend those assets is by leveraging AI tools that, hopefully, will reduce the current level of stress.