July 9, 2020

Will the UK’s approach to ‘online harms’ get to the root of the problem?

Calls for action on disinformation are mounting, but some experts fear new policies are misdirected, writes Laurie Clarke.

By Laurie Clarke

Blaming tech giants such as Facebook and Twitter for hate speech and disinformation fails to take into account other economic, social and political factors. Credit: Shutterstock


(Misinformation is factually incorrect information that is shared without the intention to mislead; disinformation is incorrect information shared with the intention to mislead. For brevity, this piece will use the term ‘disinformation’ to cover both.)

In recent weeks, calls for social media companies to act more aggressively on hate speech and disinformation have risen to a din. More than 100 companies are boycotting Facebook, and Twitter and Reddit have both removed users and content. This will likely be well received by the MPs and peers backing the expedited implementation of the UK’s Online Harms Bill – legislation that aims to designate a spectrum of ‘harmful’ online speech and regulate against it.

Policymakers often seem to harbour the belief that when the average person logs on to the internet, they are hit with a tidal wave of David Icke-style quackery and tirades about the mind-bending qualities of 5G – the potency of which will simply be too strong for most to resist. There is the pervasive idea that inadvertent exposure to this kind of disinformation leads to an erosion of trust in the government, media and science. But is this really the whole story?

The online harms white paper – setting out the vision of this brave new, more restricted world – has attracted its fair share of flak, not least for taking a regulatory stance on speech that isn’t illegal (e.g. disinformation). One only has to tune into a parliamentary committee session on the topic – complete with calls to remove encryption, anonymity and even criticism of the government’s coronavirus strategy from the internet – to become sceptical of the whole approach.

“The question is whether [social media] leads to extremism and polarisation and distrust – and causally, that is really difficult to disentangle because you don’t know what prior views people held,” says Sander van der Linden, professor of social psychology at the University of Cambridge.

It is a complicated area of study, given it is almost impossible to track what beliefs people might have gleaned solely from YouTube, and those they might have picked up from any other area of their lives. In many cases, pre-existing distrust in government, media and institutions can feed into the phenomenon, creating an intensifying feedback loop. A study found that people who share one conspiracy theory, such as an anti-vax post, are likely to share others – suggesting that such beliefs stem from a world view in which everything is to be mistrusted. Another study found that conspiracy theories tend to circulate within groups of people that are already converts.


The narrative that all ills stem solely from social media is enticing because of where it lays blame – at the feet of tech giants for allowing it to happen. But this perspective skirts the need to delve into why someone might believe disinformation in the first place. Research in this area tends to focus on the role played by individual factors such as personality, psychology and demographics (typically finding that older, right-wing males are more likely to share disinformation).

Studies examining deeper structural factors are less abundant.

However, research published in the International Journal of Press/Politics in January 2020 found stark differences in how resilient different countries are to disinformation. People from Finland, Denmark and the Netherlands were the most resilient to sharing disinformation; at the other end of the scale, the US populace was found to be uniquely vulnerable to the transmission of false information.

The study found this was down to a collection of factors. In addition to polarisation and political views, trust in the media and the strength of public broadcasting in each country were decisive. US media is known for a high degree of partisanship, and its public service broadcasting is virtually non-existent. The US ranked 45th out of 180 countries on the 2020 Press Freedom Index. Research finds that only 39% of left-wing Americans and 13% of right-wing Americans ‘mostly trust the news’.

Education levels play a part too, with the less educated more likely to share disinformation. Research finds that a lack of basic numeracy skills is linked to the likelihood that someone will believe false health claims. There has been surprisingly little research into economic factors, but this could prove a ripe area of inquiry, given that the US has much higher inequality than the likes of the Netherlands, whose population was found to be far less likely to share disinformation. Research on conspiracy theories has shown that the economically disenfranchised are more likely to be susceptible. All of this speaks to structural issues underlying the phenomenon of sharing disinformation that are generally neglected in the current discourse.

Of course, this has serious implications for how one might tackle a problem like disinformation. In the country comparison study, the UK didn’t score too poorly, ranking fifth out of 18 countries. But it is important to remain vigilant about what could be growing risk factors. For example, trust in mainstream media is declining, according to recent research published by Oxford University’s Reuters Institute for the Study of Journalism. The proportion of people who mostly trust the media fell by 20% between 2015 and 2020. The drop was especially precipitous among left-leaning people – plunging to just 15% around the time of the 2019 general election – compared with 36% of right-wingers.

The research concluded that “even the most trusted brands like the BBC are seen by many as pushing or suppressing agendas, especially over polarising issues like Brexit”. Among left-wingers, perceived anti-Corbyn media bias was likely a factor. A 2016 LSE study characterised the British press as a Corbyn “attack dog”, to the extent that it raised “pressing ethical questions regarding the role of the media in a democracy”.

Reporters Without Borders ranked the UK 35th in the world on the 2020 Press Freedom Index. Incidentally, two of the three countries most resilient to sharing disinformation – Finland and Denmark – also ranked in the top three on this scale. The index noted that despite the UK assuming the role of co-chair of the new Media Freedom Coalition, its own domestic sphere gave cause for concern. Among the reasons were the treatment of WikiLeaks founder Julian Assange; the Met Police pursuing the publication of leaked diplomatic cables as a ‘criminal matter’; the use of strategic lawsuits against public participation (SLAPPs) to deter public interest reporting; and the Conservative Party’s threat to review the BBC’s licence fee during the general election.

James Harding, who was in charge of the BBC’s news operation for years before leaving to establish new media brand Tortoise, told The Guardian that while misinformation on social media was intensely discussed, the actions of governments were causing real damage to press freedom.

“For all the discussion of fake news, there is a much more pervasive problem of state news, which is the problem of governments and politicians encroaching on the media,” he said.

UK intelligence documents leaked by Edward Snowden showed that a GCHQ information security assessment listed “investigative journalists” in a hierarchy of threat actors alongside terrorists and hackers.

These trends are likely to feed growing mistrust, and a greater susceptibility to disinformation, in the long run. In addition, there is mounting evidence that the approaches currently being championed – removing content and adding fact-check labels – are not that effective at curbing kooky convictions.

Bertie Vidgen, a Turing Institute research fellow in online harms, says that banning people – Katie Hopkins from Twitter, for example – could actually be detrimental. If the exiled head elsewhere, it becomes harder for researchers to get the data they need to study the spread of disinformation and hate speech.

“If we are banning stuff, then in a way we have failed because someone has still gone out and created the misinformation,” he says. “We need to think about interventions which happen at every single stage.”

There is also the complex issue of how to define disinformation. Vidgen points out that in some cases, such as health information, it is very obvious when something is false.

“When it is something that is more political and involves subjective viewpoints, it is very difficult to have any kind of impartial body making those decisions and it is always going to be contentious.”

Who is tasked with fact-checking is important.

“We know that partisans, who really want to share their opinion, don’t trust fact checkers and they don’t change their mind because of fact checks,” says Edda Humprecht, lead researcher on the country comparison study. “This only works for information that is not partisan.”

For example, PolitiFact, a fact-checking service used by Facebook, has been criticised by the right as anti-Republican, and by the left as a tool to reinforce centre-right political orthodoxy.

For fact checks to work effectively, fact checkers need to be trusted.

“If you have a quality newspaper doing fact checks, but someone who reads alternative news thinks that professional news media only publish lies, then it won’t work,” says Humprecht.

Van der Linden points out that retroactive fact-checking is not especially effective either: once someone has already seen disinformation, promoting a counter-narrative is much harder.

Fretting about social media regulation is simpler than interrogating societal issues, but if governments are serious about tackling the spread of disinformation, they would do well to look at the bigger picture. Mistrust isn’t bred by social media alone; it stems from a range of economic, social and political factors. Sticking a fact-check label on a Facebook post is a short-term salve, but it doesn’t address the reasons someone might believe the world is lying to them.
