Organisations that embed trust-building actions in their generative AI (Gen AI) strategies are more likely to achieve higher levels of benefit, argues new research from Deloitte. According to the firm’s second-quarter ‘2024 State of Generative AI in the Enterprise’ survey, conscious steps to build trust between the operators and users of such products lead to more successful deployments. Based on responses from nearly 2,000 business and technology leaders, Deloitte also cited improved risk mitigation as a further benefit of such trust-building exercises.
The survey revealed that respondents who took extensive trust-building measures across data, workforce, and customer-related processes were more likely to report achieving two-thirds (66%) or more of their anticipated benefits from generative AI. These actions included embedding transparency into how AI-generated outputs impact workforce strategies and watermarking synthetic data to distinguish AI-created content.
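Watermarking can take many forms in practice. As a purely illustrative sketch (the field names, generator identifier, and hashing scheme below are assumptions for the example, not part of Deloitte’s findings or any standard), provenance tagging of synthetic records might look something like this:

```python
import hashlib
import json
from datetime import datetime, timezone

def watermark_record(record: dict, generator_id: str) -> dict:
    """Attach provenance metadata to a synthetic record so downstream
    consumers can distinguish AI-created content from observed data."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return {
        **record,
        "_provenance": {
            "synthetic": True,                        # flag AI-created content
            "generator": generator_id,                # which model produced it
            "created_at": datetime.now(timezone.utc).isoformat(),
            # content hash lets auditors detect later tampering
            "content_sha256": hashlib.sha256(payload).hexdigest(),
        },
    }

# Example: tag a hypothetical synthetic customer record before it enters shared datasets
synthetic_row = {"customer_id": "C-1042", "churn_risk": 0.37}
print(watermark_record(synthetic_row, generator_id="genai-tabular-v2"))
```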
Conversely, organisations prioritising risk-management actions such as creating AI review boards and inventorying their AI implementations saw improved risk outcomes but lower levels of benefit realisation. Deloitte’s analysis highlights the need for leaders to distinguish between trust-building and risk-management efforts, as the two approaches yield different results in generative AI adoption.
The survey showed that organisations with a trust-first approach not only achieved higher benefits but also reported more consistent outcomes across key performance areas. High-trust respondents were 18% more likely than average to improve efficiency, foster innovation, enhance customer relationships, and increase revenue. By comparison, those focusing solely on risk management were 34% less likely than trust-building organisations to achieve top-tier benefit levels.
Deloitte found that trust-building actions often improve risk outcomes alongside benefits. Respondents who prioritised transparency, reliability, and humanity in their AI adoption strategies were 15% more likely to manage risks effectively while maintaining high performance levels. For example, companies that shared insights into their AI models’ data sources and training methodologies were better equipped to address concerns about bias and model hallucinations.
However, risk-centric organisations tended to focus narrowly on controls and processes, such as forming AI governance committees or reviewing vendor policies. While these measures are critical to ensuring compliance and safety, Deloitte concluded that they do not necessarily drive the broader organisational buy-in and confidence needed for sustained innovation.
Balancing trust and risk
Deloitte argued that organisations must balance trust and risk management to maximise the value of their generative AI investments. A trust-first strategy, explained the consultancy, incorporates risk mitigation but embeds AI governance into a transparent and empathetic organisational culture. This approach, it said, better positions organisations to navigate uncertainties surrounding generative AI.
For instance, leaders are encouraged to develop clear generative AI strategies that align with organisational values and regulatory frameworks. Transparent communication about AI applications and their potential risks can foster greater confidence among employees and customers. Moreover, operationalising trust through robust governance processes and enabling technologies, such as AI monitoring tools, can simplify the management of complex AI systems.
Deloitte emphasised that organisations must also consider the dynamic feedback loops between trust and risk. While risk-management actions enhance preparedness, they are most effective when combined with trust-building measures that drive stakeholder engagement and confidence.