
More than half of enterprise employees using generative AI (GenAI) assistants at work admit to inputting sensitive company data into publicly available tools, according to a new survey by TELUS Digital Experience. The report, based on responses from 1,000 US-based employees working at companies with at least 5,000 staff members, found that 57% had entered confidential information into AI platforms such as ChatGPT, Google Gemini, and Microsoft Copilot.
The survey, conducted in January 2025 via Pollfish, highlights widespread use of personal AI accounts for work-related tasks: 68% of employees reported accessing GenAI assistants through personal accounts rather than company-approved platforms. The findings point to a growing trend of ‘shadow AI’, in which AI adoption occurs outside IT and security oversight, increasing the risk of data exposure and compliance violations.
Lack of training and policy enforcement intensifies security concerns
Employees acknowledged inputting a variety of sensitive data into public GenAI tools. 31% reported entering personal details, such as names, addresses, emails, and phone numbers. 29% disclosed project-specific information, including unreleased product details and prototypes. 21% acknowledged inputting customer-related data, including contact details, order histories, chat logs, and recorded communications. 11% admitted to entering financial information, such as revenue figures, profit margins, budgets, and forecasts.
Corporate policies restricting the use of GenAI for sensitive information appear to be the exception rather than the rule: only 29% of respondents confirmed that their organisations had clear AI guidelines in place. Enforcement is similarly inconsistent. Just 24% of employees stated they had received mandatory AI training, while 44% said they were unsure whether their company had specific AI policies. 50% did not know if they were adhering to AI-related policies, and 42% indicated there were no consequences for failing to follow company AI guidelines.
The survey results also show that GenAI tools are widely relied upon for workplace productivity. 60% of employees stated that AI assistants help them work faster, while 57% said AI tools improve efficiency. 49% reported that AI enhances their work performance, and 84% expressed interest in continuing to use AI at work. Among those who support AI integration, 51% cited its role in supporting creative tasks, while 50% said it helps automate repetitive processes.
“Generative AI is proving to be a productivity superpower for hundreds of business tasks,” said TELUS Digital Fuel iX general manager Bret Kinsella. “Employees know this. If their company doesn’t provide AI tools, they’ll bring their own, which is problematic. Organisations are blind to the risks of shadow AI, even while they are secretly benefitting from productivity gains. However, providing AI tools is not enough to mitigate these risks. Employees will supplement company-provided AI with more advanced tools that are publicly available.”
As AI adoption grows in enterprise settings, the findings highlight the need for secure, company-approved AI solutions that align with data protection, regulatory compliance, and IT governance. Security experts caution that unregulated AI usage increases risks related to data sovereignty, intellectual property protection, and compliance obligations. Organisations are being urged to implement structured AI policies, provide employee training programmes, and develop secure AI platforms to mitigate potential security gaps.
Kinsella added that the survey found 22% of employees with access to a company-provided GenAI assistant still use personal AI accounts. He noted that effectively harnessing AI’s potential while addressing security risks requires enterprise GenAI solutions that not only include robust security and compliance, but can also be easily updated with the latest AI advancements.