July 15, 2022

Meta still has questions to answer about its responsible innovation and AI plans

Facebook's parent company details its responsible innovation strategy in a new report. Experts say it still has work to do.

By Matthew Gooding

Facebook parent company Meta released its first human rights report today, detailing the due diligence and actions it has undertaken in response to its products’ roles in a wide range of controversial situations around the globe. Among the many topics covered are the company’s approach to responsible innovation and AI use, and while experts say Meta’s policies in this area contain positive elements, the company still has much work to do.

Meta has released its first human rights report detailing its responsible innovation and AI strategy. (Photo: Fritz Jorgensen/iStock)

Covering 2020 and 2021, the report “details how we’re addressing potential human rights concerns stemming from our products, policies or business practices,” a Meta blog post says. Authors Miranda Sissons, Meta’s director of human rights, and Iain Levine, the company’s product policy manager for human rights, add: “We hope this report is read in the spirit we wrote it: as a genuine effort to tell the story of our evolving human rights risk management, disclose actions and insights with humility, and develop a strong practice from which to do more.”

How Meta promotes responsible innovation and AI use

Responsible innovation is the practice of developing products and services in a way that takes account of their wider societal impact. Facebook has been widely criticised for failing to tackle hate speech, with regulators and civil rights groups claiming the social network has helped the spread of harmful content and disinformation in the US as well as places like Myanmar, where the platform has been used to encourage violence against minorities.

The report points to the company’s Responsible Innovation Dimensions framework as evidence that it is trying to limit these kinds of unintended consequences in new products and updates. It says the framework currently includes ten dimensions: autonomy, civic engagement, constructive discourse, economic security, environmental sustainability, fairness and inclusion, privacy and data protection, safety, voice, and well-being.

“These dimensions in turn guide our analysis and practice,” Meta says, adding that it will develop over time to include new elements.

Meta is also actively developing new AI algorithms and platforms. In the report the company says it has “developed a dedicated, cross-disciplinary Responsible AI (RAI) team within its AI organisation to help ensure that AI governance is based on foundational values of respect for human rights, democracy, and the rule of law”, reducing the likelihood of AI bias. It points out that it has been working with the EU to develop new principles for AI development.

Does Meta take responsible innovation seriously?

While these are positive steps for Meta, Charles Radclyffe, founder and CEO of EthicsGrade, an ESG data analytics company, says it still has work to do. He cites the company’s oversight board, which looks at content moderation decisions on its platforms and how its algorithms promote different pieces of information, as a sensible initiative, but says it doesn’t go far enough.


“If they could widen the scope of their oversight board to cover all aspects of their operations and corporate governance, then they would not just perform very favourably according to our assessment of best practice, but also, I’m sure, minimise or mitigate most controversies,” Radclyffe says.

He says the company should commit to new technology developments being independently vetted to assess their impact on human rights. “It’s good to see that they are publishing the details of various human rights assessments that they have been cornered into commissioning,” he adds. “[But] it is already best practice, as articulated by the European Commission’s High Level Expert Group on AI, for fundamental rights impact assessments to be carried out on all new product initiatives.” Based on the content of the report, Meta “nearly achieves this”, he says, but their assessments are not independent.

And while the report addresses Meta’s internal moves to develop products responsibly, Dr Stephen Hughes, lecturer in responsible innovation at UCL, says it does little to consider the company’s wider impact on society. “There are lots of good policies for what Meta will do to take responsibility for what takes place within the company, but it is not clear how they are taking responsibility for the – past and future – impacts of their technologies,” Dr Hughes says. “We might think about impacts on elections, distribution of harmful images, bullying and harassment, its carbon output, and obscurity around how data is collected, tracked, sold, or used.”

He adds that Meta’s declared aim to be the “metaverse company”, with its technology underpinning interactions in fast-growing virtual worlds, is potentially problematic from a responsible innovation perspective. “Meta’s mission to ‘give people the power to build community and bring the world closer together’ places it in quite a powerful position,” he says. “Meta is not just talking about bringing the world together in the abstract, but bringing it together through Meta’s profit-making technology. In terms of responsibility, this could be seen as a conflict of interest.”

He adds: “We might also ask about the scope of responsible innovation. Does it include Meta simply not existing? How much of its work is it willing to accept as being irresponsible? What if the view of the public was that it has had enough digital connection? Is Meta willing to conclude that its entire enterprise is an example of irresponsibility?”

Read more: Meta’s future in enterprise could flounder on trust issues
