Last week, UK intelligence agency GCHQ revealed its ambition to use artificial intelligence (AI) to combat a number of social ills, including disinformation. Possible applications include detecting deep fakes, blocking botnets and automating fact-checking, a report from the agency said. But these proposals raise both practical questions and deeper quandaries, including who gets to define disinformation.
The UK’s security apparatus has become increasingly interested in disinformation and online speech in recent years, following intense focus on allegations that Russia attempted to interfere with the 2016 US presidential election using social media.
The Russia report, released by the Intelligence and Security Committee in July 2020, proposed a role for the UK’s security agencies – specifically MI5 – in policing social media content, something it termed ‘defending the UK’s discourse’. It highlighted that MI5 already works with social media companies to identify and remove terrorist content, and so was the natural agency to take on an expanded role covering disinformation.
The report suggested: “GCHQ might attempt to look behind the suspicious social media accounts that open-source analysis has identified to uncover their true operators (and even disrupt their use).”
Since then, military and intelligence bodies have become more involved in the ‘fight’ against online misinformation. Nato has taken on a role countering Covid-19-related disinformation, and GCHQ is currently undertaking an offensive cyber-operation against anti-vaccine content purportedly spread online by ‘hostile states’. “GCHQ has been told to take out anti-vaxxers online and on social media,” a well-placed source told the Times.
Using AI to tackle disinformation
In the new paper, GCHQ’s hypothetical AI-enabled tools to tackle disinformation include machine-assisted fact-checking that validates claims against trusted sources, detection of deep fake media content, detection and blocking of botnets that run machine-generated social media accounts, and online operations to counteract malicious accounts.
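The report does not spell out how these tools would work. As one hedged illustration, botnet detection is commonly framed as a classification problem over account behaviour. The sketch below uses invented features (posting rate, account age, follower ratio, reshare fraction) and synthetic data; it should not be read as GCHQ’s or any platform’s actual method.

```python
# A hedged sketch of botnet-account detection as supervised classification.
# Features and thresholds are illustrative assumptions, not a published method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 'human-like' accounts: moderate posting, older accounts,
# balanced follower ratios, mixed original/reshared content.
humans = np.column_stack([
    rng.normal(5, 2, 500),       # posts per day
    rng.normal(900, 300, 500),   # account age in days
    rng.normal(1.0, 0.5, 500),   # follower/following ratio
    rng.uniform(0.0, 0.5, 500),  # fraction of posts that are reshares
])

# Synthetic 'bot-like' accounts: very high posting rates, new accounts,
# few followers relative to follows, mostly reshared content.
bots = np.column_stack([
    rng.normal(80, 20, 500),
    rng.normal(60, 30, 500),
    rng.normal(0.1, 0.05, 500),
    rng.uniform(0.7, 1.0, 500),
])

X = np.vstack([humans, bots])
y = np.array([0] * 500 + [1] * 500)  # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score an unseen account: a high probability suggests automated behaviour.
suspect = np.array([[95.0, 30.0, 0.05, 0.9]])
print(clf.predict_proba(suspect)[0, 1])
```

In practice such classifiers would be trained on labelled accounts and combined with network-level signals, such as coordinated posting times across many accounts, rather than per-account features alone.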
The proposals raise some practical questions. The major social media platforms – the main channels through which disinformation spreads – have already rolled out their own AI-enabled tools to detect it. To help identify viral misinformation, for example, Facebook uses SimSearchNet++, an image matching model trained with self-supervised learning that can recognise images and accompanying text even after minor tweaks, supported by a technique called ObjectDNA that focuses on key objects within an image while ignoring surrounding clutter, as well as tools to detect deep fakes.
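To give a concrete sense of what image matching involves, the sketch below shows near-duplicate detection using a simple perceptual ‘difference hash’. It is an illustrative stand-in only: SimSearchNet++ relies on learned embeddings and is far more robust to manipulation, and Facebook has not published its implementation.

```python
# Minimal near-duplicate image matching via a difference hash (dHash).
# Illustrative only; production systems use learned embeddings.
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Hash an image by comparing adjacent pixels of a small greyscale copy."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def is_near_duplicate(path_a: str, path_b: str, threshold: int = 10) -> bool:
    """Flag two images as near-duplicates if their hashes differ in few bits."""
    return hamming(dhash(path_a), dhash(path_b)) <= threshold
```

A matcher like this lets a platform compare newly uploaded images against a database of content already flagged by fact-checkers, catching crops, compressions and screenshots of the same material.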
It is unclear what GCHQ would add by deploying its own tools – and indeed whether the social media platforms would permit it to do so. Facebook, for example, does not allow GCHQ or other security agencies to run such tools on its platform, although it does work closely with law enforcement in other areas. And taking action on social media accounts independently of the platforms could expose GCHQ to legal challenges.
But a bigger issue for GCHQ’s planned use of AI against disinformation could be the subjectivity of the concept, which has no legal definition in the UK.
Who defines disinformation?
The terms ‘misinformation’ and ‘disinformation’ can easily be abused by states to police civilian speech and restrict freedom of expression. A Broadband Commission report found that governments including Bahrain, Cambodia, Kazakhstan and Thailand have taken this approach.
“The paradox to highlight here is that governments that appear to be seeking to control speech for political gain try to legitimise their actions by referring to hate speech regulations and anti-disinformation laws,” the report reads. “In other words, disinformation responses risk being used (or justified for use) for censoring legitimate expression – and clearing the field for official disinformation to spread unchecked.”
Trisha Meyer, an assistant professor in digital governance and participation at Vesalius College of the Vrije Universiteit Brussel, stresses the need to distinguish between different types of ‘harmful’ material, in particular between content that is legal and content that is illegal. Rights groups have repeatedly urged legislators to steer away from woolly definitions of ‘harm’ when devising laws for social media platforms – for example, in the UK’s upcoming Online Harms bill.
Citizens should be wary of letting security agencies – whose activities are often closed to public or parliamentary scrutiny – police public speech, Meyer adds. “I think there is a need for scrutiny… and accountability is important.” She points to parallels with GCHQ’s bulk surveillance programmes that were revealed in the Snowden leaks. “What happens if you get flagged as having posted harmful content and then you get put on a list?” says Meyer. “There’s a lot of that I do think we need to be concerned about.”
This lack of accountability is exacerbated by the use of AI technologies, which are often inscrutable even to the organisations using them. Platforms including Facebook, YouTube and Twitter have all warned that their increased use of AI for content moderation is likely to lead to mistakes. “We need to make sure that AI and automated technology isn’t an additional excuse of saying… that we’re unable to be held accountable,” says Meyer.
An even deeper issue is that these approaches address how disinformation manifests rather than its root causes. Meyer says that much of the discussion around disinformation focuses on making platforms responsible, rather than on the “far deeper-seated societal issues” that might lead people to be susceptible to disinformation or distrustful of the state-endorsed narrative.
“If you simply take down content, that might solve it in the short term, but it’s not going to help from a longer-term perspective,” she says. “It’s not going to create that social cohesion or that trust that you need in government.”