
More than 40% of AI-related data breaches will stem from the cross-border misuse of generative AI (GenAI) by 2027, according to a new forecast from Gartner. The rapid adoption of GenAI technologies has outpaced the development of governance frameworks, the research firm argues, creating regulatory and security concerns, particularly around data localisation requirements. As enterprises rely on centralised computing power to support AI-driven operations, the risks associated with cross-border data transfers are increasing, said the US-based research and consulting firm.
“Unintended cross-border data transfers often occur due to insufficient oversight, particularly when GenAI is integrated into existing products without clear description or announcement,” said Gartner VP analyst Joerg Fritsch. “Organisations are noticing changes in the content produced by employees using GenAI tools. While these tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and APIs hosted in unknown locations.”
Gartner highlights the absence of global AI governance standards as a key factor contributing to security vulnerabilities and compliance challenges. Enterprises operating across multiple jurisdictions must develop region-specific AI strategies to meet varying regulations, leading to increased operational complexity and limiting AI scalability. Market fragmentation caused by differing regulatory requirements is expected to slow innovation and affect the broader adoption of AI-powered solutions.
By 2027, AI governance is predicted to become a mandated component of sovereign AI laws worldwide. Gartner advises enterprises to strengthen their governance frameworks ahead of regulatory enforcement to mitigate the risks associated with AI-driven data breaches. Establishing oversight mechanisms to ensure compliance with AI laws across different regions is expected to become essential for organisations deploying GenAI technologies.
Strengthening AI data governance and security
To address the risks posed by cross-border AI misuse, Gartner recommends extending data governance policies to include AI-specific risk assessments. Enterprises are encouraged to implement stricter data lineage tracking and cross-border transfer impact assessments to align with evolving regulations.
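One way such a cross-border transfer assessment can surface in practice is as a pre-flight check before a prompt or dataset leaves the enterprise boundary. The sketch below is purely illustrative: the region names, the policy table, and the `transfer_permitted` helper are assumptions for demonstration, not a reference to any specific regulation or product.

```python
# Illustrative sketch: a pre-flight data-residency check applied before
# data is sent to an externally hosted AI endpoint. The policy table is
# a made-up example, not a statement of any jurisdiction's actual rules.

ALLOWED_DESTINATIONS = {
    "eu": {"eu"},           # hypothetical: EU-origin data stays in the EU
    "us": {"us", "eu"},     # hypothetical: US-origin data may go to US or EU
}

def transfer_permitted(data_region: str, endpoint_region: str) -> bool:
    """Return True if sending data originating in data_region to an AI
    endpoint hosted in endpoint_region is allowed under the policy table."""
    return endpoint_region in ALLOWED_DESTINATIONS.get(data_region, set())
```

In a real deployment the policy table would be derived from the organisation's legal review of each jurisdiction, and the check would be enforced at an API gateway rather than in application code.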
Security measures such as encryption, anonymisation, and the use of Trusted Execution Environments are also advised to protect AI-generated data. Techniques such as Differential Privacy can further enhance data security when information is transferred across regions.
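To make the Differential Privacy recommendation concrete, the standard Laplace mechanism adds calibrated noise to an aggregate before it is shared across regions, so no individual record can be inferred from the released figure. The sketch below is a minimal example of that mechanism, assuming a count query with sensitivity 1; the function name and epsilon value are illustrative.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise of scale 1/epsilon (for a
    query with sensitivity 1). A smaller epsilon means a stricter
    privacy budget and therefore more noise.

    The difference of two i.i.d. exponential draws with rate epsilon
    is Laplace-distributed with scale 1/epsilon.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

The privatised count can then be transferred or published in place of the exact figure, trading a small amount of accuracy for a provable privacy guarantee.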
Additionally, organisations are expected to invest in trust, risk, and security management (TRiSM) solutions designed for AI technologies. These solutions encompass AI governance frameworks, prompt filtering, redaction tools, and synthetic data generation. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will significantly reduce exposure to inaccurate or unverified information, improving AI reliability in decision-making processes.
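As a rough illustration of the prompt filtering and redaction tools mentioned above, a TRiSM-style control might scan outbound prompts for sensitive patterns and replace them with placeholder tags before the prompt reaches an external API. The patterns and function below are a simplified assumption, not an exhaustive PII detector or any vendor's actual implementation.

```python
import re

# Illustrative redaction patterns; real tools combine many detectors
# (regexes, dictionaries, ML classifiers) and are far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace substrings matching each sensitive pattern with a
    placeholder tag such as [EMAIL], and return the sanitised prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Applied at the boundary, such a filter lets employees keep using approved GenAI tools while reducing the chance that raw identifiers leave the organisation's jurisdiction.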
The need for stronger AI data governance measures is reinforced by recent studies highlighting the financial and operational impact of data breaches. In 2024, the global average cost of a data breach rose to $4.88m, according to findings from IBM, a 10% increase on the previous year. IBM also found that organisations implementing security AI and automation reported cost savings of $2.22m compared with those without such measures.
The growing concern over AI-driven security threats is also evident in regional studies. A Cloudflare survey in late 2024 focusing on the Asia-Pacific region found that 41% of organisations experienced a data breach within a 12-month period, with nearly half reporting more than ten incidents. The study identified Construction and Real Estate (56%), Travel and Tourism (51%), and Financial Services (51%) as the most affected industries. Additionally, 87% of cybersecurity leaders expressed concerns that AI is increasing the sophistication and severity of data breaches, further underscoring the need for enhanced security frameworks to address evolving threats.