
Google AI learns to identify ‘consensus’ on controversial topics

To prevent misinformation from appearing in 'featured snippets', Google is using AI to identify consensus among sources.

By Ryan Morrison

AI engineers at Google have developed a machine learning model that identifies the consensus on controversial topics, the company said this week. The search giant is using the system to prevent misinformation from being included in the “featured snippets” that appear at the top of its search results.

Google is cracking down on misinformation in its search results. (Photo by eddygaleotti/iStock)

To reduce the risk of misinformation appearing in featured snippets, Google analyses relevant sources using its ‘multitask unified model’ (MUM), a ‘multi-modal’ natural language processing technology it developed last year.

Using MUM, “our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact,” wrote Pandu Nayak, Google’s VP of search, in a blog post.

The content of featured snippets can now be checked against “high-quality sources on the web, to see if there’s a general consensus for that callout, even if sources use different words or concepts to describe the same thing,” Nayak explained.

“We’ve found that this consensus-based technique has meaningfully improved the quality and helpfulness of featured snippet callouts.”
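Google has not published MUM’s implementation, but the idea Nayak describes, accepting a snippet claim only when several independent sources agree with it, can be illustrated with a toy sketch. Here a simple word-overlap score stands in for MUM’s far more sophisticated semantic matching, and the function names, thresholds and example sources are invented for illustration:

```python
# Toy sketch of consensus checking. This is NOT Google's method: MUM is a
# multimodal neural language model, whereas here a crude Jaccard word-overlap
# score stands in for semantic similarity between a claim and a source.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word sets (illustrative stand-in only)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def has_consensus(claim: str, sources: list[str],
                  sim_threshold: float = 0.5,
                  min_agreeing: int = 2) -> bool:
    """Treat a claim as 'consensus' if enough sources closely match it."""
    agreeing = sum(1 for s in sources if similarity(claim, s) >= sim_threshold)
    return agreeing >= min_agreeing

# Hypothetical 'high-quality sources' for the Lincoln example in the article.
sources = [
    "lincoln was assassinated in 1865",
    "abraham lincoln was assassinated in 1865 at ford's theatre",
    "the moon landing happened in 1969",
]

print(has_consensus("lincoln was assassinated in 1865", sources))   # True
print(has_consensus("snoopy assassinated abraham lincoln", sources))  # False
```

A real system would also need to handle sources that “use different words or concepts to describe the same thing”, which is exactly where a learned model such as MUM replaces the naive word overlap used above.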

The new system also reduces the chance of misleading snippets appearing in response to queries based on a false premise. “A recent search for ‘when did snoopy assassinate Abraham Lincoln’ provided a snippet highlighting an accurate date and information about Lincoln’s assassination, but this clearly isn’t the most helpful way to display this result,” Nayak wrote.

The latest update uses MUM to identify such false-premise queries, and has reduced the likelihood of snippets appearing in these cases by 40%.

Google vs misinformation

Nayak outlined some additional measures to help promote ‘information literacy’ among Google’s users. These include a new ‘About this result’ feature, which pulls in information from Wikipedia about sources linked from search results, and ‘content advisories’ that alert users when Google does not have confidence in the quality of information in its results.


These content advisories were originally developed for breaking news stories, but will now be rolled out for all search results.

The updates are part of a concerted effort by Google to crack down on misinformation on its platforms. “Google was built on the premise that information can be a powerful thing for people around the world,” wrote Nayak.

“We’re determined to keep doing our part to help people everywhere find what they’re looking for and give them the context they need to make informed decisions about what they see online.”

Other measures include a $75m investment by the Google News Initiative to help develop media literacy, Nayak wrote, and a new project to develop lesson plans for school pupils on information literacy.

