If you’re reading this article, you likely inhabit the same universe of facts as most other people. While your individual beliefs may differ from the next person’s, you probably share the same basic tenets on how the world actually works: that the Earth is round, and gravity keeps us on it; that politics is equal parts planning and pratfall; that Covid-19 is real, and modern medicine is a good thing.
Then there are the people who believe the complete opposite. These conspiracy theories rarely gain mass acceptance, but they do have consequences. The past year and a half has seen almost 80 mobile phone masts set ablaze by those who believe that 5G causes cancer, the Capitol building in Washington DC stormed by those convinced that the US presidential election was rigged by a secretive cabal, and efforts to avert a future of perpetual Covid-19 epidemics and lockdowns hampered by vaccine scepticism lingering from the days of MMR.
The claims behind these events will always find a ready audience in communities that lie at the periphery of society’s priorities and attention. But there are others still who believe a rumour not out of historical grievance, but simply because they’re disinclined to question the friend who told them, or because they’re gullible, or they run with the crowd or have been run down by jobs and family, or feel so deprived of hope that they are willing to place their trust in something, anything, that reverses their situation.
This is why Lyric Jain lost his grandmother early. “She read a really horrible thread on WhatsApp that she decided to believe,” says Jain, one that advised her to give up taking her cancer medication in favour of unproven, alternative treatments. “We probably lost her a bit earlier than we ought to have.”
That was in early 2016. Later that year, the misinformation that swirled around the Brexit referendum and US presidential election convinced Jain – then a student at the MIT Media Lab – that the world deserved an organisation that could act as an effective bulwark against all the lies, untruths and unsubstantiated rumour that permeate modern society. And so, in 2017, Jain founded Logically, a start-up that promises to track and refute misinformation at scale using a team of dedicated fact-checkers assisted by artificial intelligence.
Headquartered in Brighouse, West Yorkshire, the organisation has since experienced a string of successes. In 2019, Logically identified 50,000 false articles during the Indian general election, before partnering with police in India’s Maharashtra state to identify key sources of disinformation in the regional elections that followed. In April 2020, it mounted a joint investigation with The Guardian that exposed a former Vodafone executive as the architect of a disinformation campaign linking Covid-19 with 5G. Five months later, Logically repeated the feat by unmasking a prominent figure in the QAnon movement as an IT consultant from New Jersey.
The start-up isn’t the only one of its kind. The events of 2016 saw many like-minded organisations begin experimenting with AI as a weapon against misinformation. What they proposed was fiendishly difficult to put into practice, says Professor Sam Woolley, project director for propaganda research at the Center for Media Engagement at UT Austin. “Even asking an artificially intelligent program to suss out these kinds of articles with more success than failure is a tall order,” he says.
It’s a formula, however, that Jain believes is close to being cracked. While Logically continues to view AI as an assistive tool, its founder believes that we are closer than we think to fully automating the fact-checking process. At that point, it may become possible to solve the problem of scale that afflicts efforts to fact-check the torrent of misinformation flowing through the internet. After all, millions of articles, blog posts, videos, images and messages zip and fly across the internet on any one day. According to one survey, there are only 188 fact-checking institutions currently monitoring this flow of information.
How Logically’s AI tackles misinformation
“Falsehood flies, and the truth comes limping after it,” wrote Jonathan Swift. From the start, Logically’s mission has been to make sure that misinformation never gets off the ground. The key to this, Jain believes, is moving faster than the rumour: compressing fact-checking into a process that lasts no longer than an hour.
For that to happen, Logically relies on the goodwill of strangers. It fields claims for fact-checking from two main sources: individual users, via its free mobile app and browser extension, and the digital platforms themselves. Fielding claims from users gives Logically crucial insight into the dis- and misinformation circulating on encrypted messaging apps. It was especially “powerful having that whistleblowing network within India… exposing new and novel claims in the run-up to election day,” says Jain. What’s more, Logically’s fact-checks were actively shared across WhatsApp groups, thwarting disinformation about polling station locations like so many white blood cells fighting an infection.
All claims forwarded to Logically undergo a two-stage process: credibility assessment and veracity assessment. The first is entirely automated, using natural language processing and metadata analysis to arm fact-checkers with useful context for verifying the claims in an article. The verification stage itself combines input from both AI and a human fact-checker, with the latter having the final say on whether something is misinformation. “Wherever feasible, we aim to provide fully automatic fact-checking services, but for complex claims, we would still need human support,” says Jain.
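In code, that division of labour might look something like the sketch below. It is a hypothetical illustration rather than Logically’s actual pipeline: the names, heuristic signals and confidence threshold are all invented stand-ins for the NLP and metadata models the company describes.

```python
# Hypothetical sketch of the two-stage triage described above -- not
# Logically's code. Stage 1 (credibility assessment) is fully automated;
# stage 2 (veracity assessment) auto-resolves only obvious, high-confidence
# cases and escalates everything else to a human fact-checker.
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    source_domain: str


def credibility_assessment(claim: Claim) -> dict:
    """Stage 1: gather surface signals as context for verification.
    A real system would use NLP and metadata models; these heuristics
    are simple stand-ins."""
    return {
        "low_quality_source": claim.source_domain in {"example-rumours.net"},
        "absolute_language": any(
            word in claim.text.lower() for word in ("always", "never", "proven")
        ),
    }


def veracity_assessment(verdict: str, confidence: float, context: dict) -> str:
    """Stage 2: the model proposes a verdict, but a human keeps the final
    say on anything that is not an obvious, low-risk case. Claims from
    flagged sources always get a human look."""
    if (verdict in ("true", "false")
            and confidence >= 0.9
            and not context["low_quality_source"]):
        return verdict  # clear-cut: resolved automatically
    return "escalate_to_human"


claim = Claim("5G has been proven to cause cancer", "example-rumours.net")
print(veracity_assessment("false", 0.95, credibility_assessment(claim)))
# -> escalate_to_human
```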
That situation should improve over time. Trained using a hodgepodge of different machine learning techniques, including unsupervised, semi-supervised, supervised and active learning, the AI will eventually process a critical mass of material, allowing it to make real-time, reliable judgements in almost every case of alleged misinformation. “Our overarching aim is to accumulate up to a billion facts in the next 12 months to be able to auto fact-check as many claims as possible,” says Jain.
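Of those techniques, active learning is the one most directly tied to expanding automation over time: the model routes the claims it is least sure about to human fact-checkers and learns from their verdicts. The toy loop below illustrates the idea with invented data and a simulated human; it is not Logically’s training code.

```python
# Toy active-learning loop: label the claims the model is least certain
# about, retrain, repeat. Data, features and "human" labels are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(100, 8))           # claims already fact-checked
y_labelled = (X_labelled[:, 0] > 0).astype(int)  # 1 = misinformation (toy rule)
X_pool = rng.normal(size=(1000, 8))              # incoming, unchecked claims

for round_no in range(3):
    model = LogisticRegression().fit(X_labelled, y_labelled)
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)            # 0 = model has no idea
    ask_human = np.argsort(uncertainty)[:20]     # 20 most ambiguous claims
    verdicts = (X_pool[ask_human, 0] > 0).astype(int)  # simulated human verdicts
    X_labelled = np.vstack([X_labelled, X_pool[ask_human]])
    y_labelled = np.concatenate([y_labelled, verdicts])
    X_pool = np.delete(X_pool, ask_human, axis=0)
    print(f"round {round_no}: {len(y_labelled)} labelled claims")
```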
Fact-checking pandemic misinformation with AI
Logically’s current, hybrid approach has made it a force to be reckoned with in the fact-checking world. But it isn’t fool-proof. Although the company has since acquired a reputation as a guardian against vaccine misinformation, the beginning of the pandemic broke its AI. “It believed anything to do with Covid-19 was automatically a conspiracy, just because it was a black swan event,” says Jain. “The model didn’t know how to handle a pandemic, like most people didn’t know how to handle a pandemic.”
[Logically’s AI] believed anything to do with Covid-19 was automatically a conspiracy, just because it was a black swan event. The model didn’t know how to handle a pandemic, like most people didn’t know how to handle a pandemic.
Lyric Jain, Logically
Jain and his team have since updated Logically’s algorithm, so it leans more heavily on recent news events in making its analysis. Even so, it still stutters every once in a while. While the model now performs to a high standard when assessing Covid misinformation, Jain explains, it is less capable of dealing with claims specific to more niche areas, like financial markets. These, he says, “need slightly different knowledge concepts, ontologies, for our capabilities to be able to work well”.
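Jain doesn’t say how that recency bias works under the hood, but one common approach is to decay the weight of evidence exponentially with its age, as in the sketch below. The half-life, stance scores and function names are assumptions for illustration, not Logically’s parameters.

```python
# Hypothetical recency weighting: one simple way an analysis could "lean
# more heavily on recent news events". Not Logically's implementation.
from datetime import datetime, timedelta, timezone


def recency_weight(published: datetime, half_life_days: float = 14.0) -> float:
    """Exponential decay: evidence loses half its influence every
    `half_life_days` days."""
    age_days = (datetime.now(timezone.utc) - published).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)


def weighted_support(evidence: list[tuple[float, datetime]]) -> float:
    """`evidence` holds (stance, published) pairs: stance is +1.0 if an
    article supports the claim, -1.0 if it refutes it. Returns the
    recency-weighted average stance, in [-1, 1]."""
    pairs = [(stance, recency_weight(ts)) for stance, ts in evidence]
    return sum(s * w for s, w in pairs) / sum(w for _, w in pairs)


now = datetime.now(timezone.utc)
evidence = [(+1.0, now - timedelta(days=180)),  # old article supports the claim
            (-1.0, now - timedelta(days=2))]    # fresh reporting refutes it
print(weighted_support(evidence))  # strongly negative: recent refutation wins
```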
There are also some subject areas that Logically refuses to get drawn into. While users of its mobile app are free to flag claims for fact-checking that are political in nature, Jain is adamant that the start-up will not partner with political organisations. Aside from being a commercial dead end, it would also risk Logically being “used as someone else’s pawn” for partisan ends.
It could also make it more of a target. Propagators of misinformation do not take kindly to public censure, a lesson Jain learned early when hackers using Russian IP addresses attempted to attack Logically’s servers (“I don’t want to attribute that as Russian state actors,” says Jain. “That was way back in 2018, when we really [had] not made a dent, or pissed anyone off just yet.”) There have been death threats too, prompting the start-up to invest in anonymisation technologies so that staff cannot be tracked.
“It is work that will ruffle feathers,” says Jain, with a tone that suggests he’s argued that particular case with himself a thousand times or more. “It’s a risk that we are knowingly taking.”
AI versus AI
It’s also one that seems increasingly necessary. Despite his scepticism of the claims many start-ups make about AI-driven fact-checking, Woolley believes it will inevitably play a central role in moderating societal discourse. Once we accept that, we must then confront the biases that shape our understanding of the truth. “How do we build AI that is encoded with the best elements of humanity, rather than it being encoded with the worst?” says Woolley.
How do we build AI that is encoded with the best elements of humanity, rather than it being encoded with the worst?
Sam Woolley, UT Austin
Then there’s the question of how technology will be used to spread misinformation. In the aftermath of 2016, Woolley says, there was widespread alarm about the use of social media bots to give a false impression of the popularity of certain claims. If that relatively unsophisticated technology could wreak such havoc, one can only imagine the mileage disinformation sources will get out of NLP and deepfakes.
“Many of the propagandists that I’ve spoken to over the course of my time as a researcher in this field have told me that they don’t necessarily focus on trying to manipulate people all the time,” says Woolley. “They’re often trying to actually manipulate algorithms, and to manipulate the very AI systems that we’re discussing.” It is for this reason that, while it maintains a commitment to explainable AI, Logically will not publish the code underlying its verification tools. “The risk of weaponisation is very real,” says Jain.
It’s an answer that speaks to Jain’s ultimate ambitions for Logically. Recent years have seen rules around speech on the internet tighten across Western democracies, he argues, while the list of institutions the public trusts to verify information dwindles. With governments and platforms reluctant to stop the flow of misinformation, that leaves room for organisations like Logically to step in as neutral arbiters, a “kind of equivalent of an S&P or a Moody's, but for information”.
This may be overly optimistic, given social media platforms’ increasing sensitivity to misinformation and the fact that the baseline data processed by Logically’s verification AI is sourced from good old-fashioned journalism. There’s also the question of how many people actually pay attention to fact-checking.
“If I’m believing in a specific conspiracy, I’m not open to changing my mind,” explains Professor Dorit Nevo of the Rensselaer Polytechnic Institute. What Nevo and her colleagues have recently proved, however, is that AI-powered fact-checking can be crucial in maintaining a truthful narrative around breaking news events. Ultimately, this is where Jain sees Logically playing a key role.
“We need to build capacity to fact-check efficiently and at scale, especially when situations with heightened public interest occur,” he says. “The role of automated fact-checking is to help provide that surge capacity when it's needed, help human-centric workflows be more efficient and make well-informed assessments on obvious and low-risk cases.”
Even so, Jain does not believe AI is a panacea. Automated fact-checking is necessary to combat the problem of scale that accompanies the fight against misinformation, “but these can’t be the only fact-checking efforts that are underway”, says Jain.
And yet, there may be times when this combined approach will still be defeated: an audio deepfake, perhaps, of a politician saying something that’s simultaneously heinous and entirely unsurprising. In situations such as those, when we are confronted with new and compelling information that seems as though it could, or perhaps should, be true, what is fact and what is fiction will ultimately come down to what we choose to believe.