Enterprises are moving ahead with the adoption of generative AI (GenAI), even as cybersecurity risks continue to rise. According to Fortanix’s 2025 State of Data Security in GenAI Report, which surveyed 1,000 executives, 87% of security leaders reported a data breach in the past year. At the same time, 97% of companies plan to integrate GenAI into their operations, either by purchasing existing solutions or developing in-house systems, to streamline processes and drive revenue growth. As AI adoption expands, organisations must address new security challenges related to protecting sensitive data across multiple platforms and environments.

“The data clearly shows that nothing is going to stand in the way of organisations moving forward with GenAI deployment this year despite many organisations not fully grasping the complex data security issues surrounding the technology,” said Fortanix chief product officer Anuj Jaiswal.

The report highlights that 97% of enterprises restrict GenAI usage, and 89% of executives believe these controls are effective. Despite the policies, however, 95% of professionals continue to use AI tools, revealing a gap between policy and practice. Among them, 66% use GenAI for work-related tasks, while 64% access AI tools through personal email accounts, bypassing corporate security controls.

This trend raises concerns about unregulated data exposure, as sensitive business information could be accessed or shared outside secured environments. The report notes that many organisations lack oversight of how employees interact with GenAI, increasing the risk of unauthorised access and compliance violations. More than half of IT executives (53%) expressed concern over shadow AI, where employees use GenAI tools without IT department approval.

Additionally, 41% of security executives reported that their organisations had detected unapproved AI applications being used within their networks, further complicating efforts to maintain data integrity and security compliance.

Encryption strategies lag behind AI expansion

While 88% of companies have already allocated budgets for GenAI deployment, security remains a secondary concern in some cases. Although line of business (LOB), IT, and security executives all rank AI model accuracy among their top concerns, only IT executives prioritise data security and privacy as critical risk factors.

Encryption remains a key safeguard against unauthorised data access, yet Fortanix’s report suggests that many security tools are outdated and insufficient for AI-driven environments. Some 62% of security leaders indicated that their current encryption strategies are not optimised for protecting AI-generated data, and 58% of organisations struggle to enforce consistent encryption policies across cloud, on-premises, and hybrid environments.

As AI systems generate and process increasing volumes of proprietary and customer data, ensuring end-to-end encryption across all touchpoints remains a growing challenge. The report emphasises that standard encryption models designed for structured data may not be sufficient for the unstructured and evolving datasets associated with GenAI.

The report also finds that 74% of executives feel pressure to implement GenAI, driven by board directives, competitive market demands, and leadership expectations. That pressure is felt most acutely by LOB executives (82%) and IT executives (81%), while security executives (56%) remain more cautious, citing potential cybersecurity threats.

Among those investing in GenAI, 47% cite competitive advantages as their primary reason for adoption, while 39% see AI-driven efficiencies as the most significant driver. However, only 21% of security executives believe that their organisations are adequately prepared to address AI-specific security risks.
