As the business world continues to find innovative ways to apply AI to accelerate growth and streamline operations, a similar trajectory of inventive thinking is emerging in the cyberthreat landscape. While still theoretical at this juncture, one of the big concerns about AI combined with quantum computing is that data encrypted today could be harvested and decrypted in the future. That prospect is enough to give pause to even the most experienced technologists. So how are AI advancements, including generative AI, jeopardizing traditional data security techniques such as encryption, and how can the emerging risks be mitigated?
The Relationship Between AI, Quantum Computing and Encryption
Encryption is a foundational element of data security, so the potential impacts of AI deserve careful consideration. The unique attributes of AI, and particularly those of generative AI, could significantly improve the development of encryption algorithms themselves, the management of encryption keys, and the control of access and administrative rights. Because it can learn, adapt and ultimately produce net-new outputs, generative AI raises the bar far above rigidly preset algorithms, giving security researchers, developers and operators a powerful new “ally” in their work.
At the same time, new risks arise because any powerful new technology can also benefit attackers. Malicious actors could push the currently established boundaries of data security by circumventing existing security controls, disrupting the proper management of keys, searching for ways to steal keys and credentials, and more. Looking further ahead, quantum computing (potentially aided by generative AI) poses a threat to current encryption algorithms: its accelerated computing capabilities could, in theory, break the encryption within a reasonable amount of time (e.g., months or even moments, rather than centuries).
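To make the “centuries versus moments” contrast concrete, here is a toy back-of-the-envelope calculation. It is an illustration only, not a performance model: the rate of one billion guesses per second is an arbitrary assumption, and Shor’s algorithm (which targets RSA and elliptic-curve cryptography) breaks those schemes outright rather than by brute force. What the sketch does show is the well-known effect of Grover’s quantum search algorithm, which needs only about the square root of the classical number of guesses and therefore effectively halves a symmetric key’s bit strength:

```python
def brute_force_years(key_bits: int, guesses_per_second: float = 1e9) -> float:
    """Years needed to exhaust a keyspace of 2**key_bits at the given guess rate."""
    return 2 ** key_bits / guesses_per_second / (3600 * 24 * 365)

# Classical exhaustive search of a 128-bit key: astronomically long.
print(f"classical, 128-bit key:        {brute_force_years(128):.2e} years")

# Grover's algorithm searches ~sqrt(2**128) = 2**64 possibilities, so a
# 128-bit key offers only ~64 bits of effective security against it.
print(f"quantum (Grover), 128-bit key: {brute_force_years(64):.2e} years")
```

The second figure is on the order of centuries rather than the ~10^22 years of the first, which is why guidance for the quantum era favors larger symmetric keys (e.g. AES-256) and new public-key algorithms altogether.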
Because generative AI is likely to ramp up the capabilities of attackers and defenders alike, it may trigger, or accelerate an already existing, arms race for more powerful AI as each side seeks to remain competitive and gain an advantage.
Approaching Data Security Fundamentals Through the Lens of a Generative AI Future
As the evolution of AI produces both positive and negative impacts for cybersecurity, IT teams must start now to reinforce traditional data security strategies (like encryption) to avoid unmanageable problems down the road. We are already seeing efforts to manage these challenges in government through public policy. While it is unlikely that we will see new legislation in the next year, we are seeing progress with the president’s executive order on safe, secure and trustworthy AI. The EO is helping to further define AI safety and security protocols by leveraging the power and resources of executive branch departments such as Homeland Security, Defense, Energy and Commerce. One powerful tool the executive branch has is its procurement policy: as the largest buyer of goods and services on the planet, the federal government can profoundly influence the market, including technology and safety standards.
Tech ecosystem players are also turning their attention to well-established, authoritative, independent sources, such as NIST, the Center for Internet Security, ISO and IEEE. As new generative AI-led security challenges arise, these organizations, which produce and maintain operational and technical best practices based on the input of a wide variety of industry, government and academic experts, will develop guidelines to address real-world scenarios. We will also see best practices updated in specific industry tools as vendors, manufacturers, system integrators, industry analysts and third-party testing services glean new insights through practical, real-world experience with generative AI.
As AI practices are refined worldwide, enterprises will begin to apply these new guidelines and improve their protection against cybersecurity risks, including taking steps to adopt post-quantum cryptography algorithms to “future-proof” encryption. Enterprises will be able to identify where AI could fortify weak areas, mitigate vulnerabilities, and detect and respond to attacks in real time.
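One widely discussed pattern for this transition is “hybrid” key establishment: combine a classical shared secret (e.g. from an ECDH exchange) with a post-quantum KEM secret (e.g. from ML-KEM) so the derived session key stays safe as long as either component survives. The sketch below is illustrative only; the HMAC-based extract step mirrors HKDF, but `os.urandom` merely stands in for the outputs of real key exchanges, which a production system would obtain from actual cryptographic libraries:

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-style extract step: condense input keying material into a 32-byte key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

classical_secret = os.urandom(32)  # placeholder for an ECDH shared secret
pq_secret = os.urandom(32)         # placeholder for an ML-KEM shared secret

# Concatenating both secrets before extraction means an attacker must
# break BOTH the classical and the post-quantum exchange to recover the key.
session_key = hkdf_extract(b"hybrid-kdf-salt", classical_secret + pq_secret)
print(session_key.hex())
```

The design choice here is defensive: if post-quantum algorithms turn out to have undiscovered weaknesses, the classical component still protects the key, and vice versa.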
AI Can Play an Integral Role in Minimizing the Attack Surface
As we learn more about the interplay of AI with areas such as quantum computing and the credible threat this poses to traditional data security measures like encryption, limiting the data attack surface becomes a top priority.
Typically, the data attack surface is defined as any part of the organization, including websites, applications, email accounts and human interactions, that could serve as an avenue of attack for malicious actors. AI could be very impactful in reducing the data attack surface by enhancing the data management process, particularly in data discovery, where it can find sensitive data that may be vulnerable or that may no longer be needed. AI can also be effective during data classification, determining the nature of the data and how it should be classified per organizational policy so that controls are applied immediately. The same holds for data sanitization: AI can help ensure that sensitive and/or redundant, obsolete or trivial (ROT) data is adequately erased (to standard), verified and recorded for inspection and audit purposes. Leveraging these approaches to minimize the attack surface reduces the information that bad actors could access now and potentially decrypt in the future.
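As a minimal illustration of pattern-based data discovery and classification, the sketch below tags records that match sensitive-data patterns. The labels and regular expressions are hypothetical examples for demonstration; real deployments would use far richer detection, including validation logic and ML-based classifiers:

```python
import re

# Hypothetical sensitivity labels mapped to simple detection patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> set:
    """Return the set of sensitivity labels whose patterns match the record."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(record)}

records = [
    "Contact: jane.doe@example.com",
    "SSN on file: 123-45-6789",
    "Quarterly revenue summary",
]
for record in records:
    labels = classify(record)
    print(record, "->", labels or {"unclassified"})
```

Records flagged this way can then be routed to encryption, access-control or sanitization workflows per organizational policy, shrinking the pool of data worth stealing in the first place.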
Overall, the effect that AI has already had on business is visible in many areas. While only time will tell what the future holds for the cybersecurity landscape, it is worth taking a deeper look now at optimizing AI for improved efficacy and efficiency of present-day and future security controls and operations.