A new report from the Capgemini Research Institute has found that 97% of surveyed organisations experienced at least one security breach related to generative AI (Gen AI) in the past year. The study, “New defences, new threats: What AI and Gen AI bring to cybersecurity”, sheds light on the growing role of AI and Gen AI in cybersecurity while highlighting significant vulnerabilities that accompany their adoption.

Conducted in May 2024, the survey covered 1,000 organisations across 13 countries, spanning Asia Pacific, Europe, and North America. Participants, each with annual revenue exceeding $1bn, represented industries including banking, insurance, energy, healthcare, and aerospace.

“The use of AI and Gen AI has so far proved to be a double-edged sword,” said Marco Pereira, Capgemini’s global head of cybersecurity, cloud infrastructure services.

“While it introduces unprecedented risks, organisations are increasingly relying on AI for faster and more accurate detection of cyber incidents. AI and Gen AI provide security teams with powerful new tools to mitigate these incidents and transform their defence strategies.”

Over 90% of respondents reported experiencing at least one cybersecurity breach in the past year, up sharply from 51% in 2021. Nearly half of them estimated financial losses exceeding $50m over the last three years. The rise in breaches coincides with the adoption of generative AI technologies, which are being exploited by malicious actors.

The report identifies several risks associated with generative AI adoption, including data poisoning, sensitive information leaks, and misuse of deepfake technologies. Around 67% of organisations expressed concerns about these issues, while 43% reported direct financial losses from incidents involving deepfake content.

Other vulnerabilities include hallucinations (incorrect or misleading outputs from AI systems), biased content generation, and prompt injection attacks. These challenges are compounded by the improper use of Gen AI by employees, expanding the attack surface for organisations and increasing the complexity of defence strategies.

Despite the risks, organisations are leveraging AI and Gen AI to strengthen their cybersecurity frameworks. Over 60% of respondents noted that AI had reduced their time to detect threats by at least 5%, while 40% reported similar improvements in response times.

Three in five organisations described AI as essential for effective threat detection and response. The same proportion anticipates that Gen AI will further enhance long-term cybersecurity strategies by enabling faster identification and mitigation of sophisticated threats.

The growing threat landscape has prompted 59% of organisations to consider increasing their cybersecurity budgets. Investments are being directed towards advanced data management systems, cloud computing resources, and AI-specific risk mitigation strategies.

Capgemini’s report highlights three primary risk areas arising from generative AI adoption: increasingly sophisticated cyberattacks, an expanded attack surface, and vulnerabilities throughout the lifecycle of custom Gen AI solutions. Addressing these challenges is critical to maintaining robust security postures.

Recommendations for managing Gen AI risks

The report outlines several measures organisations can adopt to balance the benefits of Gen AI with its associated risks. These include implementing advanced data management systems to safeguard sensitive information, developing robust governance policies to ensure ethical AI use, and providing employees with comprehensive training on AI-specific cybersecurity risks.

Additionally, Capgemini emphasises the importance of continuous reassessment of security frameworks to adapt to evolving threats. By adopting these strategies, organisations can harness the transformative potential of AI and generative AI while mitigating emerging vulnerabilities.

Recently, Google Cloud researchers cautioned that cyberattacks could escalate in 2025 as threat actors increasingly leverage AI-based tools to advance their malicious activities. In its Cybersecurity Forecast 2025 report, Google Cloud predicted that threat actors will use AI and large language models (LLMs) to craft more sophisticated phishing campaigns, SMS scams, and other social engineering tactics.

Read more: Google Cloud report warns of surge in AI-driven cyberattacks next year