Google says it will defend its enterprise customers against copyright claims if they use its artificial intelligence products to generate content. It joins companies such as Microsoft and Adobe in offering such an assurance, as Big Tech seeks to convince businesses it is safe to use generative AI.

Google already offered third-party indemnity against copyright claims for data used to train its models and is expanding that to generated output (Photo: Sundry Photography / Shutterstock)

Generative AI has become a big business recently, with the largest tech companies and IT services providers reshaping their offerings to make it a core feature. Some are selling generative image and text tools directly, while others provide services that let customers embed AI models in their own products.

But while vendors are confident their products will be widely adopted, many tech leaders still have concerns about security and copyright issues associated with AI, with some large organisations even banning staff from using ChatGPT and other popular tools.

There are multiple lawsuits in progress against generative AI labs such as OpenAI and Stability AI over the use of copyrighted content in training AI models. To address concerns that companies might face similar lawsuits over content generated with these models, Microsoft, IBM, Adobe and now Google have offered indemnity or some other form of protection.

Google Cloud said in a statement: “If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.” This applies to any product utilising Duet AI, its AI collaboration tool that is embedded across Workspace, the Google Cloud Platform and Vertex AI. 

Google is offering enterprise customers of its Duet AI service two types of indemnity: one covering the data Google used to train its models, and one covering generated output. The latter refers to any content created by customers in response to prompts or other inputs sent to Google services.

However, the indemnity doesn’t apply if a user were to “intentionally create or use generated output to infringe the rights of others”, and it only applies when Google’s products are used to generate what the company considers appropriate content.

“We hope this gives you confidence that your company is protected against third parties claiming copyright infringement as a result of Google’s use of training data,” wrote Neal Suggs, Google’s VP of legal, and Phil Venables, VP of IT Security for Google Cloud. “Put simply, regardless of the training data underlying all our services, Google indemnifies you.

“It means that you can expect Google Cloud to cover claims, like copyright infringement, made against your company, regardless of whether they stem from the generated output or Google’s use of training data to create our generative AI models.”
