Artificial intelligence is becoming ubiquitous in businesses across all sectors, with more than 95% of enterprises saying it is important to their digital transformation efforts in a recent global survey by 451 Research. But this enthusiasm has been matched by fears about the social impact of AI and calls for robust governance. New analysis by Tech Monitor reveals a gulf between industries in the maturity of their AI governance practices, with pharmaceutical companies leading the way and energy companies and, worryingly, tech start-ups falling behind.

Vials of AstraZeneca and Pfizer BioNTech Covid-19 vaccines: the pharma sector is a leading driver of AI governance. (Photo by Marc Bruxelle/Shutterstock)

UK start-up EthicsGrade provides ESG ratings of companies based on their AI governance credentials. The company has identified a number of important practices and policies, such as the extent to which ethical considerations are incorporated into system design and stakeholder engagement, and scores companies using publicly available information.

Pharmaceutical companies lead on AI governance

Tech Monitor analysis of EthicsGrade’s AI governance ratings reveals a stark divide between sectors. While companies in biotechnology lead the pack with an average score of 68.4, giving them an average C-grade rating, the energy sector is the worst performer, scoring an average R-grade rating of only 42 out of 100. Only 4.4% of the companies rated by EthicsGrade received a score of over 80 required to achieve an A-grade classification.


The biotech sector is bolstered by the performance of pharmaceutical companies, which make up seven of the top ten companies in EthicsGrade’s sample of more than 200 companies. US pharma giant Merck ranks first, performing strongly across most aspects of AI governance, including data privacy and ethical risk.

Pharma’s strong AI governance is a by-product of its well-established business and medical ethics processes, which feed through into AI adoption, says Charles Radclyffe, CEO of EthicsGrade and former head of AI at Fidelity International.

“When a pharmaceutical organisation translates science into data science, they take a level of professional rigour through that process,” he says. “Bribery and corruption have been, historically, a major issue in that industry and so they have mechanisms for whistle-blowers and opportunities for people to raise concerns.”

Other regulated industries like finance also perform better than average due to their high level of regulatory scrutiny, while sectors that have traditionally faced a lower compliance burden, such as technology, perform more poorly and demonstrate a greater variance in scores.

The existence of established and robust governance structures also explains why incumbent companies tend to outperform challengers. While new companies in sectors such as automotive – for example, Tesla – are at the frontier of innovation, they score considerably lower for AI governance than traditional automotive companies, which are more risk-averse in their adoption of new technologies. This can similarly be seen in finance, where fintech start-ups average a score of 55.5 compared to 58.7 for retail banks.

“While incumbents tend to be slower to innovate around emerging technology such as AI, they do appear to have a more mature relationship to risk,” says Radclyffe. “Therefore, governance is stronger, closer to the top and better connected to the operations.”

Chinese companies are among the weakest performers in EthicsGrade’s ratings. This is not necessarily a reflection of poor governance but of a lack of transparency, with Western companies more willing than their Chinese peers to talk about technologies and processes that are “half-baked”, says Radclyffe.

While the EthicsGrade model may penalise Chinese companies as a result of their differing culture around disclosure, transparency is an integral part of good AI governance, says Radclyffe. “You need to be able to do two things with governance: one is to demonstrate that you have controls in place; secondly, make sure you have the ability for stakeholders – employees, members of the public, consumers, etc. – to raise concerns,” he says. “If you’re not willing to talk about your governance, your governance is fairly meaningless.”

The AI governance divide between SMEs and corporates

While there is increasing awareness of the importance of AI ethics, the approach at enterprise level still tends to be high-level, says Nigel Crook, professor of AI and robotics at Oxford Brookes University.

“[Companies] will talk about things like transparency of AI and eliminating bias without really understanding how difficult both of those things can be,” he says. “Having a policy that says you will do it is one thing. But having the competence to understand how to develop systems and deploy them within your organisation, so that they are truly transparent and unfair bias is minimised, is actually really hard.”

This is particularly true for SMEs, which often lack the requisite funding and technical expertise to build out their AI governance capabilities. “[There is a] digital divide between the corporates who have the resources and the ability to develop and try out AI-based products on a large scale and small to medium-sized companies who just simply can’t,” says Crook.

There are concrete steps that companies can take to improve their AI governance. It begins with having a workforce that can correctly assess the level of certainty and accountability to attribute to AI systems, says Selin Nugent, assistant director at The Institute for Ethical AI. It is also important that companies take ownership of “uncomfortable truths” about past decision-making that may introduce biases.

“Their past history of performance which feeds into these AI systems may not necessarily match up to their goals and may actually hold them back rather than helping them propel forward,” she says. “It’s [about] coming to terms with what past mistakes that we’ve made [and asking] how do we improve on this.”

While the “fairly glacial pace of change” in AI ethics can be frustrating, progress is being made and establishing governance standards does not happen overnight, says EthicsGrade’s Radclyffe.

“To put it into context, it was only really in the 1970s when accounting standards started to become standardised and international,” he says. “Even if it does take this decade and longer to resolve [issues around AI governance], that’s not necessarily a disastrous outcome.”