June 1, 2023 (updated 29 Jun 2023 11:18am)

EU commissioner calls for AI code of conduct ‘within months’

A code of conduct would help companies prepare for the introduction of AI legislation.

By Ryan Morrison

A new “AI code of conduct” should be introduced within months, the European competition commissioner has said. Margrethe Vestager, who has led many of the Commission’s investigations into the behaviour of Big Tech companies, wants both the EU and the US to push a voluntary code for the AI industry as an interim measure until new laws can be drawn up to regulate the powerful technology. Convincing the White House may be an uphill battle, as some US officials are not convinced by the EU’s approach.

The EU’s Margrethe Vestager says an AI code of conduct is required (Photo by Thierry Monasse/Getty Images)

The EU’s new AI Act is currently going through the legislative process and includes strict guidelines governing biometrics, imposes transparency requirements on AI and bans facial recognition in some public areas. It is the first comprehensive AI legislation outside of China.

Vestager told reporters during an EU-US trade council meeting in Sweden on Wednesday that “we need to act now”. Speaking of the AI Act, she said that “in the best of cases it will take effect in two-and-a-half to three years’ time”. That is “obviously way too late”, she warned.

The surprising success of ChatGPT, following its launch in November last year, has sparked an AI revolution, with companies like Google, Microsoft and Salesforce changing business models and adding generative AI to products. This in turn has prompted governments to consider the implications of foundation model AI for national security, jobs and intellectual property rights.

An agreement is needed on specifics and not just general statements about the risks, Vestager warned. She added that the US and EU should drive the process and not rely on companies alone. “I think we can push something that will make us all more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds.”

Industry should be involved in creating the code of conduct, Vestager said, and it should happen as quickly as possible. “This is the kind of speed you need,” she said. It needs to happen in “the coming weeks, a few months” rather than years, she added, and will give society faith in the technology.

Differences in approach between the EU and US

G7 leaders have been meeting to discuss the implications of AI, particularly when it comes to threats to national security through misinformation. They have called for the development of technical standards to keep the technology trustworthy.


Companies are also working to improve the trustworthiness of AI. Google’s UK AI lab DeepMind published an “early warning” framework that can flag whether an AI model has the potential to pose serious risk to humanity and a group of industry leaders including OpenAI’s Sam Altman signed an open letter calling for urgent risk mitigation.

While the EU is pursuing a regulation-led approach to controlling AI, the Biden administration is split on the right approach to solving the problem. Some officials in the commerce department support similar legislation to the EU, but those in national security and the state department feel it would put the country at a competitive disadvantage.

Initially the US had looked to be following the EU in regulating the use of AI, particularly in high-risk areas such as the law and healthcare. Regulation of such high-risk uses featured in an early framework for AI systems, but the EU has since moved to tighten its rules around foundation model AI.

Individual EU countries are already using existing legislation to tackle the rise of AI. Italy banned ChatGPT until OpenAI complied fully with GDPR, and Google has been slow to roll out new AI tools due to issues complying with the regulation.

Speaking to Bloomberg, US National Security Council spokesman Adam Hodge said the administration is working to “advance a cohesive and comprehensive approach to AI-related risks and opportunities”.

Read more: AI safety: industry leaders warn of ‘extinction risk’ for humanity
