Facebook parent company Meta released its first human rights report today, detailing the due diligence and actions it has undertaken in response to its products’ roles in a wide range of controversial situations around the globe. Among the many topics covered is the company’s approach to responsible innovation and AI use, and while experts say Meta’s policies in this area contain positive elements, the company still has much work to do.

Meta has released its first human rights report detailing its responsible innovation and AI strategy. (Photo: Fritz Jorgensen/iStock)

Covering 2020 and 2021, the report “details how we’re addressing potential human rights concerns stemming from our products, policies or business practices,” a Meta blog post says. Authors Miranda Sissons, Meta’s director of human rights, and Iain Levine, the company’s product policy manager for human rights, add: “We hope this report is read in the spirit we wrote it: as a genuine effort to tell the story of our evolving human rights risk management, disclose actions and insights with humility, and develop a strong practice from which to do more.”

How Meta promotes responsible innovation and AI use

Responsible innovation is the practice of developing products and services with their wider societal impact in mind. Facebook has been widely criticised for failing to tackle hate speech, with regulators and civil rights groups claiming the social network has helped the spread of harmful content and disinformation in the US as well as places like Myanmar, where the platform has been used to encourage violence against minorities.

The report points to the company’s Responsible Innovation Dimensions framework as evidence that it is trying to limit these kinds of unintended consequences in new products and updates. It says the framework currently includes ten dimensions: autonomy, civic engagement, constructive discourse, economic security, environmental sustainability, fairness and inclusion, privacy and data protection, safety, voice, and well-being.

“These dimensions in turn guide our analysis and practice,” Meta says, adding that the framework will evolve over time to include new elements.

Meta is also actively developing new AI algorithms and platforms. In the report, the company says it has “developed a dedicated, cross-disciplinary Responsible AI (RAI) team within its AI organisation to help ensure that AI governance is based on foundational values of respect for human rights, democracy, and the rule of law”, which it says reduces the likelihood of AI bias. It also points out that it has been working with the EU to develop new principles for AI development.

Does Meta take responsible innovation seriously?

While these are positive steps for Meta, Charles Radclyffe, founder and CEO of EthicsGrade, an ESG data analytics company, says it still has work to do. He cites the company’s oversight board, which looks at content moderation decisions on its platforms and how its algorithms promote different pieces of information, as a sensible initiative, but says it doesn’t go far enough.

“If they could widen the scope of their oversight board to cover all aspects of their operations and corporate governance, then they would not just perform very favourably according to our assessment of best practice, but also, I’m sure, minimise or mitigate most controversies,” Radclyffe says.

He says the company should commit to having new technology developments independently vetted to assess their impact on human rights. “It’s good to see that they are publishing the details of various human rights assessments that they have been cornered into commissioning,” he adds. “[But] it is already best practice, as articulated by the European Commission’s High Level Expert Group on AI, for fundamental rights impact assessments to be carried out on all new product initiatives.” Based on the content of the report, Meta “nearly achieves this”, he says, but its assessments are not independent.

And while the report addresses Meta’s internal moves to develop products responsibly, Dr Stephen Hughes, lecturer in responsible innovation at UCL, says it does little to consider the company’s wider impact on society. “There are lots of good policies for what Meta will do to take responsibility for what takes place within the company, but it is not clear how they are taking responsibility for the – past and future – impacts of their technologies,” Dr Hughes says. “We might think about impacts on elections, distribution of harmful images, bullying and harassment, its carbon output, and obscurity around how data is collected, tracked, sold, or used.”

He adds that Meta’s declared aim to be the “metaverse company”, with its technology underpinning interactions in fast-growing virtual worlds, is potentially problematic from a responsible innovation perspective. “Meta’s mission to ‘give people the power to build community and bring the world closer together’ places it in quite a powerful position,” he says. “Meta is not just talking about bringing the world together in the abstract, but bringing it together through Meta’s profit-making technology. In terms of responsibility, this could be seen as a conflict of interest.”

He adds: “We might also ask about the scope of responsible innovation. Does it include Meta simply not existing? How much of its work is it willing to accept as being irresponsible? What if the view of the public was that it has had enough digital connection? Is Meta willing to conclude that its entire enterprise is an example of irresponsibility?”

Read more: Meta’s future in enterprise could flounder on trust issues