Cloud providers lead the private sector when it comes to the ethical oversight of artificial intelligence, according to AI governance ratings provider EthicsGrade, but the fintech, insurance and HR SaaS sectors are falling behind. With the European Union’s AI Act poised to impose new governance requirements on companies operating ‘high-risk’ AI applications, EthicsGrade’s analysis suggests many are unprepared.
Ethical AI in the cloud
EthicsGrade scores companies or their sub-divisions out of 100 for their governance of AI, based on measures including the extent to which ethical considerations are incorporated into system design and stakeholder engagement. Of the more than 200 companies covered in its latest analysis, only Microsoft achieved an A-grade. Four of the company's product divisions – cloud, hardware, Teams and Skype – scored 80 out of 100.
Collectively, cloud providers performed best in the analysis, with an average score of 65.50. Microsoft's cloud division received a top score of 80, while Google Cloud scored particularly highly for the transparency of its AI principles and, more importantly, for putting those principles into practice by conducting “deep ethical analyses and risk and opportunity assessments”.
Cloud providers' high scores reflect the pressure they are under from buyers to handle their data responsibly. “What’s really interesting about the cloud sector is that virtually every other company we’ve covered is reliant on its tools and analytics capabilities,” says EthicsGrade founder Charles Radclyffe. “Those companies will be recognising that there has to be a consistency between ESG and digital responsibility, and will be looking at cloud providers to assess whether a partnership would support them, or if any controversies would affect them as a result.”
Not every cloud-based software segment is equally effective at AI governance, however: HR SaaS providers received an average score of just 50.1. While well-known providers SAP and Salesforce were graded highly, lesser-known Spanish vendor Meta4 received an individual score of just 23.7. These findings are particularly concerning given the potential impact of these platforms on their customers' employees. “A mistake in an algorithm at the platform level will have a significant ripple effect and there’s a lot of risk in that,” says Radclyffe.
Another area of concern is the fintech sector. UK fintech Revolut scored just 39 out of 100, while Diem, the now-defunct digital currency backed by Meta, scored 35.
Traditional retail banks fared better. Radclyffe puts this discrepancy down to the fundamental difference in governance and ownership structures between the two. “The pressures on the operations of fintechs are driven by investors seeking greater valuations, as opposed to retail banks where shareholders want you to mitigate risks at all costs and deliver improvements of efficiencies,” Radclyffe explains.
No such excuse is available for the general insurance providers included in EthicsGrade's analysis. Although Swiss Re was among the ten highest-rated companies, the industry as a whole scored as poorly as the fintechs. This is especially alarming given the potential for discrimination in AI-powered insurance.
The sector with the lowest score, by some distance, is the dating app industry. This reflects a general "pushback" against third-party assessments of its AI policies, Radclyffe says.
AI regulation on the horizon
The EU's forthcoming AI Act aims to limit the dangers of AI by categorising applications on a scale from low to high risk. Under the current proposal, companies operating high-risk applications will be required to introduce governance measures including continuous risk assessment and the provision of clear and adequate information to users – many of which are also covered by EthicsGrade's assessment.
While there is an ongoing debate as to what exactly constitutes a high-risk application, cloud providers and the financial services sector could both fall under this category due to the sensitivity of the data that they handle. EthicsGrade's ratings suggest many are ill-prepared for the legislation.
Technical standards could help. While standards have traditionally addressed narrowly technical considerations, policymakers are increasingly applying the standards-setting process in service of social, ethical, economic and political aims. Article 20 of the draft AI Act establishes a framework for the development of AI standards designed to limit harms to citizens. (Last month, the UK launched its own initiative to shape global AI standards.)
Emily Taylor, CEO of Oxford Information Labs, believes that technical standards will be key to the AI Act's success. Implementing international standards is complex, she says, but it gives companies a commercial incentive to comply. “Once you comply with standards, then you get market access and your regulatory and compliance burden is going to be cut right down,” she says.
Taylor is also concerned that the EU's AI regulation will draw heavily on the enforcement mechanisms of the General Data Protection Regulation (GDPR), coupling severe financial liabilities with a formalistic “box-ticking exercise” that does little to change behaviour.
Despite that, she believes that the regulatory landscape for emerging technologies is a lot more mature than it was 20 years ago. “Even technology companies are recognising that regulation is necessary to some degree,” she says. “But everyone wants good regulation, not bad regulation.”
For Radclyffe, growing scrutiny of AI from governments and the public will result in a “gear shift” in how the private sector approaches AI governance. But his optimism is only partial. “Are we likely to see regulation coming in force faster than it did in the past? Definitely. But will it be fast enough? Probably not.”