Google has announced plans to flag artificial intelligence (AI)-generated and AI-edited images in its search results, part of a wider effort to improve transparency and combat misinformation.

The Alphabet subsidiary aims to launch the feature later this year through its “About this image” tool, which gives users access to metadata describing how an image was created, including whether AI was involved. The tool will be accessible through Google Search, Google Lens, and Android’s Circle to Search, and will let users check whether an image has been generated or altered with AI by selecting the relevant option from a drop-down menu.

“We believe it’s crucial that people have access to this information,” wrote Laurie Richardson, vice president of trust and safety at Google, in a blog post. As such, added Richardson, “we are investing heavily in tools and innovative solutions, like SynthID, to provide it.”

Google working with C2PA on flagging AI-generated content

SynthID, developed by Google DeepMind, embeds a watermark directly into the pixels of AI-generated images. The mark is imperceptible to the human eye but can be detected by software, allowing content to be identified as AI-generated even after it has been modified.
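SynthID’s model is proprietary and its details are unpublished, but the general idea of an imperceptible, machine-readable mark can be illustrated with a deliberately simple stand-in. The Python sketch below hides a payload in the least significant bits of an image’s red channel; a toy scheme like this is destroyed by compression or resizing, which is precisely the weakness SynthID’s learned watermark is designed to overcome. All names here (MARK, embed, extract) are illustrative, not part of any Google API.

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark. Unlike
# SynthID, this mark does NOT survive JPEG compression, cropping or resizing.
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical payload

def embed(img: Image.Image, payload: str) -> Image.Image:
    """Hide payload bits in the LSB of the red channel, NUL-terminated."""
    bits = "".join(f"{b:08b}" for b in payload.encode()) + "00000000"
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "image too small for payload"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    return out

def extract(img: Image.Image) -> str:
    """Read red-channel LSBs back until the NUL terminator."""
    rgb = img.convert("RGB")
    px = rgb.load()
    w, h = rgb.size
    data, bits = bytearray(), ""
    for i in range(w * h):
        bits += str(px[i % w, i // w][0] & 1)
        if len(bits) == 8:
            byte = int(bits, 2)
            if byte == 0:
                break
            data.append(byte)
            bits = ""
    return data.decode(errors="replace")

if __name__ == "__main__":
    marked = embed(Image.new("RGB", (64, 64), "white"), MARK)
    print(extract(marked))  # -> AI-GENERATED
```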

Google’s efforts are supported by its involvement with the Coalition for Content Provenance and Authenticity (C2PA), an industry group focused on developing standards to trace the origins of digital content. The company joined the C2PA earlier this year as a steering committee member, alongside companies like Adobe and Microsoft. Through this collaboration, Google has contributed to the latest version of the C2PA’s Content Credentials standard, which helps users track how a piece of content, such as an image, was created, including the tools used.
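As a concrete illustration of where Content Credentials live in a file: for JPEG images, the C2PA specification stores the manifest in APP11 marker segments as a JUMBF box. The sketch below, a heuristic scan rather than a real validator, walks a JPEG’s segments and reports any that look like C2PA data; actually parsing the manifest and checking its signature requires a full implementation such as the coalition’s open-source c2patool.

```python
# Minimal sketch, assuming a baseline JPEG: report APP11 (0xFFEB) segments
# that appear to carry a C2PA/JUMBF payload. Not a validator.
import struct
import sys

def find_c2pa_segments(path: str) -> list[int]:
    """Return byte offsets of APP11 segments that look like C2PA/JUMBF data."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    offsets = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # malformed or entropy-coded data; stop the simple scan
        marker = data[i + 1]
        if marker == 0xD9:  # end of image
            break
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        segment = data[i + 4 : i + 2 + length]
        if marker == 0xEB and (b"jumb" in segment or b"c2pa" in segment):
            offsets.append(i)
        if marker == 0xDA:  # start of scan: compressed image data follows
            break
        i += 2 + length
    return offsets

if __name__ == "__main__":
    hits = find_c2pa_segments(sys.argv[1])
    print(f"found {len(hits)} C2PA/JUMBF segment(s)" if hits
          else "no Content Credentials found")
```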

In addition to enhancing transparency, the move addresses concerns about the growing use of AI-generated content and its potential to mislead users. The tools arrive at a time when deepfakes and AI-manipulated images have become a pressing issue for industries and governments worldwide.

Google acknowledged that while the new feature provides valuable insights, it requires users to take extra steps to access the metadata, which may limit its visibility. The internet giant also plans to extend the use of C2PA metadata to its advertising systems and to explore ways of integrating it into YouTube. Google is working on validating content against the forthcoming C2PA Trust List, which will help confirm that a piece of content’s provenance data is genuine. “We also know that partnering with others in the industry is essential to increase overall transparency online as content travels between platforms,” said Richardson.

Search giant’s antipathy toward mass-produced AI-generated written content

Google’s approach to AI-generated content extends beyond visuals. The company emphasises the need for transparency and accuracy in AI-generated written content. According to its Search Quality Rater Guidelines, content created primarily to manipulate search rankings or mislead users will face penalties, regardless of whether it was produced by humans or AI. However, Google encourages responsible AI use to enhance, rather than replace, human creativity in producing valuable information.

Google’s measures to flag AI-generated content follow similar initiatives by other technology companies. Adobe and Microsoft have both adopted metadata standards to label AI-generated media.

Adobe’s tools, such as Firefly and Photoshop, incorporate Content Credentials, embedding metadata that records when an image was created, which tools were used, and what modifications were made, helping users verify the content’s authenticity.
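To give a sense of what that embedded record looks like, the sketch below summarises a simplified, hypothetical C2PA manifest of the kind inspection tools print for a signed image. The field names (claim_generator, c2pa.actions, softwareAgent) follow the C2PA specification, but the values are invented for illustration, and real manifests carry many more assertions plus a cryptographic signature.

```python
# Hypothetical, simplified Content Credentials manifest for illustration.
manifest = {
    "claim_generator": "Adobe Photoshop/25.0",  # tool that produced the claim
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created", "when": "2024-09-17T10:21:00Z"},
                    {"action": "c2pa.edited", "softwareAgent": "Adobe Firefly"},
                ]
            },
        }
    ],
}

def summarise(m: dict) -> None:
    """Print the creation tool and the recorded edit history."""
    print(f"claim generator: {m['claim_generator']}")
    for assertion in m["assertions"]:
        if assertion["label"] == "c2pa.actions":
            for act in assertion["data"]["actions"]:
                agent = act.get("softwareAgent", "unknown tool")
                when = act.get("when", "time not recorded")
                print(f"  {act['action']} by {agent} at {when}")

summarise(manifest)
```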

Similarly, Microsoft applies Content Credentials to all AI-generated images in Bing Image Creator, using invisible digital watermarks to confirm the time and date of creation.
