November 16, 2022 (updated 17 November 2022, 8:45am)

Ofcom mulls algorithmic transparency requirement for social media companies

Requiring social media companies to disclose the algorithms they use to select content would improve transparency, the regulator's CEO says.

By Ryan Morrison

Social media companies could be forced to share the code behind the algorithms used to determine what content a user sees when they log in. The move is part of a wider review by Ofcom into how to tackle extremism and misinformation on sites such as Twitter, TikTok and Facebook.

Dame Melanie Dawes, Ofcom CEO, says greater emphasis on algorithmic transparency could be required of social media companies in future. (Photo by Eóin Noonan/Sportsfile for Web Summit via Getty Images)

The regulator has been on a hiring frenzy to prepare for the introduction of the stalled Online Safety Bill, which will give Ofcom the power to punish tech companies that fail to properly protect users, particularly younger users, from harm.

It is also in the process of reviewing the role social media and tech companies generally play in the “media plurality” regime, particularly around news coverage. The outcome could see social media platforms regulated in a similar way to news organisations such as the BBC and ITV.

Dame Melanie Dawes, Ofcom’s CEO, told the Financial Times in an interview that the regulator is seeing more evidence of news feeds driving divisions in society. The algorithms that drive those divisions and select what users see are subject to little scrutiny, and she wants to change that with greater algorithmic transparency.

“The more you consume your news from social media, the more likely you are to have more polarised views and find it harder to cope with other people’s views,” Dame Melanie said.

‘Ethical hazards’

These algorithms represent “ethical hazards” that require regulatory oversight, says Dr Catherine Menon, principal lecturer in computer science at the University of Hertfordshire.

“Bias in AI algorithms can be directly harmful or they can be indirectly harmful by misrepresenting the world,” she says. “In this case, it’s the latter: these algorithms are giving users a particularly restricted view of the numerous perspectives on certain highly emotional, controversial topics.


“Morally, these algorithms represent ethical hazards. Our actions and identities are shaped by what we see as prevailing social norms, and these algorithms are tailoring, customising and manipulating what their users perceive these norms to be.”

Dr Menon adds that there is “certainly an ethical responsibility to assess and mitigate these hazards – although whether the responsibility lies with Ofcom specifically is debatable!”

The idea of algorithmic transparency isn’t a new one. Last year the Central Digital and Data Office published guidelines to help public sector organisations provide clear information on the tools they use to support decisions.

The standards formed part of the National Data Strategy and involved collaboration with civil society groups and external experts. “Algorithmic transparency means being open about how algorithmic tools support decisions,” the CDDO’s website explains. “This includes providing information on algorithmic tools and algorithm-assisted decisions in a complete, open, understandable, easily accessible, and free format.”
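The CDDO guidance asks organisations to publish structured information about each tool they use. A minimal sketch of what such a published record might contain, written here as a Python dict with illustrative field names (these are not the standard's actual schema, and the tool described is hypothetical):

```python
# Hypothetical algorithmic transparency record in the spirit of the CDDO
# guidance. Field names and the tool itself are illustrative only.
record = {
    "tool_name": "Content moderation triage model",
    "organisation": "Example Department",
    "purpose": "Flags user reports for human review, in priority order",
    "decision_role": "Decision support only; a human makes the final call",
    "data_used": ["report text", "report category", "reporter history"],
    "model_type": "Gradient-boosted decision trees",
    "human_oversight": "All flagged items are reviewed by trained moderators",
}

# A record of this kind should answer, at minimum: what the tool does,
# what data it uses, and where humans sit in the decision loop.
for field in ("purpose", "data_used", "human_oversight"):
    assert field in record
```

The value of such a record lies less in the exact fields than in forcing the organisation to state, in plain language, what the tool does and who remains accountable for its decisions.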

Ofcom’s plan would see this same level of scrutiny applied to private sector organisations when the risk of harm is significant, including driving divisiveness among the user base.

Ethar Alali, founder of decentralised innovation engineering company Axelisys, said a delicate balance is needed between auditability and continued innovation. “A major concern is whether Ofcom, a non-specialist regulator, will need it presented in a way humans will understand. That will be impossible for some algorithms.”

An over-emphasis on transparency “could end the use of deep learning algorithms in areas like drug discovery, cancer detection and law, and set back UK innovation two decades at least,” Alali argues. “Hence, the devil remains in the detail. On both sides of the debate.”

Before taking over Twitter in a $44bn deal that appears to have left the platform more divisive than ever, Elon Musk declared his intention to make the Twitter algorithm open source to drive transparency. At the time, however, experts told Tech Monitor it was “a token gesture” that could pose a real security risk for the platform.

Musk has yet to elaborate on how he might make the company’s algorithms, which determine what content is seen by which users, available for greater scrutiny. It may be easier said than done.

"Moves to increase transparency by disclosing information about how such algorithms work are generally a good thing, but open-sourcing the actual code comes with its own risks," said Michael Rovatsos, professor of artificial intelligence and director of the Bayes Centre for data science and AI at Edinburgh University.

This is because the mechanisms that select which tweets are shown, which video appears next on a For You page, or who sees what in a news feed are complex.

Improving algorithmic transparency

China offered a rare look at this kind of code earlier this year when it published details of the algorithms used by the country’s Big Tech firms. Its internet regulator, the Cyberspace Administration of China, already requires algorithmic transparency from companies including Tencent and Alibaba.

Each algorithm gets a line of description. For WeChat, the popular messaging and e-commerce platform owned by Tencent, the entry describes how the personalised push algorithm is “applied to information recommendation, and recommends graphic and video content that may be of interest to users through data such as user browsing records, following official accounts, and what users are watching”.

Providing an insight into how TikTok selects the videos on its notoriously opaque “For You” page, ByteDance’s entry reveals that Douyin uses “user’s historical clicks, duration, likes, comments, relays, dislikes and other behavioural data” to generate the selection of videos and their order.
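Behavioural signals of the kind listed in these disclosures (clicks, watch time, likes, comments, shares) are typically combined into a single relevance score per item before ranking. A minimal sketch of that idea, with made-up signal names and weights that are not taken from any platform's actual disclosure:

```python
# Hypothetical engagement-weighted ranking: each candidate video gets a
# score from the user's past behavioural signals, then videos are sorted.
# Signal names and weights are illustrative, not any platform's real ones.

SIGNAL_WEIGHTS = {
    "clicks": 1.0,
    "watch_seconds": 0.05,
    "likes": 2.0,
    "comments": 3.0,
    "shares": 4.0,
    "dislikes": -5.0,   # negative signals push a video down the feed
}

def score(video_signals: dict) -> float:
    """Weighted sum of a user's behavioural signals for one video."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
               for name, value in video_signals.items())

def rank(candidates: dict) -> list:
    """Order candidate videos by descending engagement score."""
    return sorted(candidates, key=lambda vid: score(candidates[vid]),
                  reverse=True)

feed = rank({
    "cat_video":  {"clicks": 3, "watch_seconds": 40, "likes": 1},
    "news_clip":  {"clicks": 1, "watch_seconds": 5, "dislikes": 1},
    "dance_clip": {"clicks": 2, "watch_seconds": 90, "shares": 1},
})
print(feed)  # dance_clip first: long watch time plus a share outweigh clicks
```

Even in this toy form, the point regulators care about is visible: the weights alone decide what rises to the top, and nothing in the output explains that choice to the user.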

Whether this sort of approach will be taken by Ofcom remains to be seen. It is also unclear whether the regulator will require disclosure at all, as it is just one of many ideas for improving the quality of conversation on social media.

“We think the starting point is transparency,” Dawes told the FT. “So it may not be about setting rules but it may be about requiring more transparency or giving the regulator the ability to shine a light [on] how these feeds and algorithms work.”

Different types of algorithms on social media

Dr Menon said it is not clear whether this approach of requiring companies to disclose their algorithms would actually solve the problem at hand. “One question may be that of transparency: with many AI systems it can be very hard for a layman - or even a regulator - to understand why the algorithm makes the decisions it does. The ability to provide this transparency is another ethical responsibility, this time on the tech company developing it.”

Alali told Tech Monitor the problem is that there are so many different types of algorithms and making the wrong move could have serious consequences. "While the transparency of algorithms appears essential in the fight against prejudice or the world of audit, this is a particularly complex challenge in modern algorithms."

This is because there are many different types of algorithms, he explained, some deterministic and others not. A deterministic algorithm can be run multiple times and will always produce the same result - such as a rule-based system that can identify a car from its characteristics. These can be written so that reading the code easily reveals how decisions were made.
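The car example can be made concrete. A rule-based classifier like the hypothetical one below is fully deterministic: the same input always yields the same label, and every decision traces back to a visible, human-readable rule - exactly the property that makes such code auditable:

```python
# A deterministic, auditable classifier in the spirit of Alali's example:
# every decision maps to an explicit rule a regulator could read.

def classify_vehicle(wheels: int, powered: bool, carries_cargo: bool) -> str:
    if wheels == 2 and not powered:
        return "bicycle"
    if wheels == 2 and powered:
        return "motorcycle"
    if wheels == 4 and carries_cargo:
        return "van"
    if wheels == 4:
        return "car"
    return "unknown"

# Running it twice with the same input always gives the same answer,
# which is what makes the logic straightforward to audit.
assert classify_vehicle(4, True, False) == "car"
assert classify_vehicle(4, True, False) == classify_vehicle(4, True, False)
```

An auditor asking "why was this labelled a van?" gets a direct answer: four wheels and it carries cargo. That is the transparency property learned models lack.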

"Where this falls over is in knowledge systems, which accumulate information through a diverse source of signals that do not represent the thing you're trying to determine," he explains, adding that "it is much the same as trying to understand how memories and pictures are stored in the human brain. These artificial neural network systems are key within machine learning, and most of them have absolutely no representation whatsoever. In fact, the knowledge changes all the time.

"This means simply looking at the state of the algorithm, including its store of neural weightings, gives you absolutely no information about how it made its decision. You cannot follow it as a human. And this leaves citizens with an interesting dilemma. Because Ofcom will not be able to reconstruct the decision. It’s unrepeatable. Which begs the question, why are we doing it?"
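The contrast with a learned model can be shown in a few lines. The toy network below (a sketch in pure Python, with hand-set weights, not any platform's real system) correctly computes the XOR function, yet "inspecting its state" means printing matrices of plain numbers that carry no human-readable account of why any individual output was produced:

```python
# Tiny fixed neural network that computes XOR. The point: even when the
# model is correct, its stored weights are just numbers; dumping them
# explains nothing about any individual decision.

W1 = [[1.0, 1.0], [1.0, 1.0]]   # hidden-layer weights
b1 = [0.0, -1.0]                # hidden-layer biases
W2 = [1.0, -2.0]                # output-layer weights

def xor_net(x):
    # ReLU hidden layer followed by a linear output
    hidden = [max(0.0, x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j])
              for j in range(2)]
    return sum(h * w for h, w in zip(hidden, W2))

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, "->", xor_net(inputs))

# "Looking at the state of the algorithm" amounts to this:
print(W1, b1, W2)   # opaque numbers, no decision trace
```

Scale those three small matrices up to the millions or billions of weights in a production recommender and the audit problem Alali describes follows directly: the state is fully disclosed, yet no human can read a decision out of it.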

Read more: Regulator reveals algorithmic secrets of China’s Big Tech companies
