September 13, 2023 (updated 23 Nov 2023, 11:37am)

Adobe and IBM among latest to sign AI watermarking code

As part of the White House's voluntary AI code, companies commit to watermarking AI-generated content and to tackling misinformation.

By Ryan Morrison

Adobe, IBM and Nvidia are among eight new companies that have signed up to US President Joe Biden’s voluntary AI governance scheme, the White House has announced.

Originally aimed at the big AI labs OpenAI, Anthropic and Google DeepMind, the initiative is designed to encourage signatories to uphold specific governance standards, including watermarking AI-generated content and “facilitating third-party discovery and reporting of vulnerabilities in their AI systems”. The news comes as a new study reveals business leaders are delaying AI projects due to a lack of guidance and clear regulation.

Adobe is one of eight new signatories to the White House voluntary code that, among other provisions, commits the company to watermarking AI content. (Photo by Mats Wiklund/Shutterstock)

The White House agreement also holds signatories to safety commitments such as testing output for misinformation and security risks. They are also expected to share information on ways to reduce risk with the wider community and invest in cybersecurity measures.

Other signatories to the voluntary code include Salesforce, the CRM giant and owner of the Slack productivity platform; US data intelligence company Palantir; Stability AI, creator of the Stable Diffusion generative AI model; and AI labs Cohere and Scale AI. The earlier version, published in July, included signatures from Amazon, Anthropic, Inflection, Google, Meta, Microsoft and OpenAI.

Each of the organisations agreed to begin implementation immediately in work described by the administration as “fundamental to the future of AI”. The White House said at the time of the initial announcement that the initiative underscores three important principles for the development of artificial intelligence: safety, security and trust. This mirrors the principles outlined in the UK AI White Paper published earlier this year.

“To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the White House declared. “The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.”

It is thought that better regulation and governance will help speed up the adoption of AI across the marketplace. Some companies, including IBM, are working on tools to improve transparency and accountability, such as tracking the data used to train their models and reporting on each stage of training. The White House fact sheet also urges companies to avoid publishing model weights and technical specifics until all security risks have been addressed.


Governance and regulation

The new voluntary code is seen by the White House as the foundation of a global agreement on AI development. Similar commitments to release models for third-party testing have been secured by the UK government. Indeed, the principles outlined in the Biden administration’s voluntary scheme are similar to those the UK has set out as priorities for debate during its upcoming Bletchley AI Safety Summit. These light-touch approaches contrast with the more prescriptive attitude towards AI adopted by other jurisdictions, such as the EU with its upcoming AI Act.

According to Forbes, the AI market is projected to reach $407bn by 2027, growing at a CAGR of 37.3% through 2030. Despite this, nervousness about AI products and services lingers in the enterprise, with governance cited as a key reason for delaying rollouts.
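As a back-of-the-envelope illustration of what that growth rate implies (a sketch based on the figures above, not a projection published by Forbes), compounding the $407bn 2027 figure forward at 37.3% a year suggests a market of roughly $1.05tn by 2030:

```python
def project(value, cagr, years):
    """Compound a value forward at a constant annual growth rate (CAGR)."""
    return value * (1 + cagr) ** years

# Figures cited in the article: $407bn by 2027, 37.3% CAGR to 2030.
implied_2030 = project(407, 0.373, 3)
print(f"Implied 2030 market size: ${implied_2030:.0f}bn")  # prints: Implied 2030 market size: $1053bn
```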

An investigation by global legal practice DLA Piper found that the current paucity of regulatory frameworks is a major source of this anxiety among business leaders. Based on interviews with 600 senior executives at companies with an average annual turnover of $900m or more, the report also reveals that over a third of those surveyed were not confident their firms were complying with current AI law, while another 39% were unclear on how regulation is evolving.

Nearly half of respondents, meanwhile, said that AI projects have been interrupted, paused or rolled back due to data privacy issues and a lack of governance framework.

Responding to DLA Piper’s report, Jeanne Dauzier of the firm’s Global AI Practice Group urged enterprises to be more cautious in rolling out new AI products.

“Two clear messages ring out from this research,” said Dauzier, the group’s co-lead for intellectual property and technology. “First, there is an urgency to adopt AI – this is not an area where businesses feel able to wait and see. Secondly, [there is] the need to ensure that the amazing opportunities in productivity and efficiency do not come at an ethical cost to the business and community.”

Read more: Adobe buys generative AI startup
