North American Developers Optimistic About Generative AI and Code Security
Developers in North America are more likely than their counterparts in other regions to see generative AI (GenAI) as a tool to improve the security of the code they write, according to a report by market research firm Evans Data Corp.
The company’s most recent Global Development Survey found that 37.6% of programmers in North America said they expect GenAI to improve code security, a higher percentage than 31.3% of developers in South America. In Europe, the Middle East and Africa, that figure came in a 30.7%; for the Asia-Pacific region, it was 30.1%
The rapid innovation and adoption of generative AI among enterprises, smaller companies, and consumers has come with almost as many worries about security and privacy as it has with the promise of vast benefits in everything from how businesses run to how people interact with the various devices in their homes.
The story for developers is no different. “Developers have relied on machine intelligence for years: automation, code completion, low-code development, static code analysis, and the like,” cybersecurity company Trend Micro wrote. “But generative AI marks a distinct and major leap forward.”
There are myriad generative AI products aimed at developers, from OpenAI’s ChatGPT chatbot – which helped kick off the land rush that is generative AI when it was released in November 2022 – and GitHub Copilot to Google’s PaLM 2, Cohere Generate, and Anthropic’s Claude, all of which can be used to help generate code. Each of them promises to make a developer’s life better in oh-so-many ways.
GenAI can improve the efficiency and productivity of software development workflows by automating coding tasks and providing real-time code suggestions, accelerating the time to market for products and saving money. It allows for natural language interfaces in development tools, improves code by identifying redundancy or inefficiency and enhances documentation, according to IBM.
Global consultancy McKinsey and Company found in a study that developers can complete coding tasks twice as fast with genAI.
Code Security Benefits
Such benefits can also include improving security in the development process.
“Generative AI can enhance code security by analyzing vast datasets to identify vulnerabilities and suggest patches,” Evans Data said in a statement. “It learns from historical security breaches to predict potential threats, automatically generates secure coding patterns, and provides real-time feedback to developers, significantly reducing the risk of security flaws in software applications.”
In addition, developers can use the technology to automatically create test cases, which means they can identify potential issues earlier in the development process, according to IBM. Plus, IBM claimed, “By analyzing large codebases, generative AI can assist software development teams in identifying and even automatically fixing bugs. This can lead to more robust and reliable software, as well as faster development cycles.”
NVIDIA, whose GPUs are cornerstones in many of the large language models (LLMs) that underpin generative AI tools, notes that the technology can create synthetic data to simulate previously unseen attack patterns, run safe simulated attacks to test defenses, and analyze vulnerabilities, a time-consuming task when done only by developers. “An LLM focused on vulnerability analysis can help prioritize which patches a company should implement first,” the company wrote. “It’s a particularly powerful security assistant because it reads all the software libraries a company uses as well as its policies on the features and APIs it supports.”
And Yet, Dangers
All that said, there are dangers and risks lurking in generative AI that developers need to keep in mind. The open-source OWASP organization in a report detail 10 types of vulnerabilities for AI apps built with LLMs, ranging from data leakage and prompt injections to insecure plug-ins, supply-chain risks, and misinformation or “hallucinations.”
Jacob Schmitt, senior technical content marketing manager for continuous integration and continuous development (CI/CD) platform maker CircleCI, noted the benefits that come with using generative “in the right way,” “the technology poses inherent risks that demand careful consideration and ongoing vigilance against the potential for introducing errors, security vulnerabilities, compliance issues, and ethical concerns into your code base.”
Schmitt noted a few examples: poor or inefficient code quality that doesn’t meet a company’s standards, for one. A lack of visibility into the generative AI-generated code, for another — where even the code works, it might be difficult to understand its logic. Given that many LLMs are trained on both public and proprietary code, software created with generative AI may violate copyright laws or leak proprietary or sensitive information, violating regulations around data privacy and security.
In addition, AI models trained on large code repositories could including exploitable patterns or known vulnerabilities that could inadvertently find their way into a developer’s work. Schmitt also warned about increasing an organization’s technical debt, “the cumulative consequences of suboptimal design choices, shortcuts, or compromises made during development.”
“Accumulated technical debt can lead to decreased code maintainability, increased development time for future enhancements or bug fixes, and higher costs in the long run,” Schmitt wrote. “Crucially, the extent of the technical debt you are likely to accrue depends on how you deploy and integrate generative AI into your development workflow.”