Up to 83% of organisations are using artificial intelligence (AI) to generate code, according to a study by Venafi. The cybersecurity firm’s latest survey of security executives also found that 92% of respondents are concerned about the security implications of such widespread reliance on AI by DevOps teams. The survey, which gathered insights from 800 security executives across the US, the UK, Germany, and France, highlights the growing gap between fast-paced AI-driven development and the ability to secure this new technology effectively.

Despite the mounting risks, 72% of security professionals report feeling they have no choice but to allow developers to use AI to remain competitive. Meanwhile, 63% have considered banning AI-generated code altogether because of security concerns.

AI becoming commonplace in code development

One of the most pressing challenges highlighted in the report is the difficulty security teams face in keeping up with the speed of AI-powered development. 66% of respondents admitted that it is almost impossible for security teams to manage AI-driven code at the pace it is being deployed, leading to fears of an impending “security reckoning.” 78% of security leaders, meanwhile, foresee serious security challenges as AI adoption continues to surge.

“New threats – such as AI poisoning and model escape – have started to emerge while massive waves of generative AI code are being used by developers and novices in ways still to be understood,” said Venafi’s chief innovation officer Kevin Bocek.

The report also reveals a heavy reliance on open-source code, with security leaders estimating that 61% of their applications incorporate open-source components. While 90% of security leaders trust these libraries, 86% believe open-source prioritises speed over security best practices. This creates a serious challenge, as 75% of respondents admitted that verifying the security of every line of open-source code is nearly impossible.

“Companies can’t blindly trust open-source solutions,” said Venafi’s technical director, Steve Judd. “They really have very little idea who has created or contributed towards them.”

Governance gaps

Venafi’s research also points to a significant gap in governance, with 47% of companies lacking policies to ensure the safe use of AI within their development environments. Additionally, 63% of security leaders feel it is nearly impossible to govern the use of AI in their organisations, citing a lack of visibility into how AI is being used.

For its part, Venafi advocates code signing as a crucial defence mechanism against the risks posed by AI and open-source development. 92% of security leaders agree that code signing is essential for establishing trust in open-source code, as it verifies the authenticity and integrity of the code. By checking signatures before code runs, organisations can help ensure that unauthorised or tampered code is not executed, reducing the risk of a breach.
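To illustrate the principle, the sketch below shows a minimal signing-and-verification flow using Python’s cryptography library and an Ed25519 key pair. The artifact contents, function names, and key handling are illustrative assumptions, not a description of Venafi’s tooling: in a real pipeline, the private key would live in a protected signing service and verification would happen before deployment or execution.

```python
# Minimal sketch of code signing: sign a build artifact, then verify it before use.
# The artifact and key handling here are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_artifact(private_key: Ed25519PrivateKey, artifact: bytes) -> bytes:
    """Produce a detached signature for a build artifact (e.g. generated code)."""
    return private_key.sign(artifact)


def verify_artifact(
    public_key: Ed25519PublicKey, artifact: bytes, signature: bytes
) -> bool:
    """Return True only if the artifact is untampered and signed by the trusted key."""
    try:
        public_key.verify(signature, artifact)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Generated inline purely for demonstration; production keys stay in an HSM or
    # signing service, never in the build environment.
    signing_key = Ed25519PrivateKey.generate()
    artifact = b"print('hello from a generated script')"

    signature = sign_artifact(signing_key, artifact)
    print(verify_artifact(signing_key.public_key(), artifact, signature))        # True
    print(verify_artifact(signing_key.public_key(), artifact + b"x", signature))  # False: tampered
```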

It is worth noting that much of the coding process was automated for years before AI arrived. Tools such as code autocompletion and low-code platforms have handled routine tasks like bug detection, boilerplate generation, and language translation. Integrated development environments (IDEs) have also long provided autocompletion to speed up coding and reduce manual effort.

These automated systems traditionally focused on low-level tasks such as debugging and formatting, and mainly aimed to streamline specific stages of development. AI tools now go further, using large language models (LLMs) to generate complex code snippets outright.

Read more: Meet the CIOs that regret investing in AI