A new “AI code of conduct” should be introduced within months, the European competition commissioner has said. Margrethe Vestager, who has led many of the Commission’s investigations into the behaviour of Big Tech companies, wants both the EU and the US to push a voluntary code for the AI industry as an interim measure until new laws can be drawn up to regulate the powerful technology. Winning over the White House may be an uphill battle, however, as some US officials are sceptical of the EU’s approach.
The EU’s new AI Act is currently making its way through the legislative process. It includes strict rules governing biometrics, imposes transparency requirements on AI systems and bans facial recognition in some public spaces. It is the first comprehensive AI legislation outside of China.
Vestager told reporters during an EU-US trade council meeting in Sweden on Wednesday that “we need to act now”. Speaking of the AI Act, she said that “in the best of cases it will take effect in two and a half to three years’ time”. That is “obviously way too late,” she warned.
The surprise success of ChatGPT since its launch in November last year has sparked an AI boom, with companies such as Google, Microsoft and Salesforce changing business models and adding generative AI to their products. This in turn has prompted governments to consider the implications of foundation-model AI for national security, jobs and intellectual property rights.
An agreement is needed on specifics and not just general statements about the risks, Vestager warned. She added that the US and EU should drive the process and not rely on companies alone. “I think we can push something that will make us all more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds.”
Industry should be involved in drafting the code of conduct, Vestager said, and it should happen as quickly as possible: in “the coming weeks, a few months” rather than years. “This is the kind of speed you need,” she said, adding that a swift agreement would give society faith in the technology.
Differences in approach between the EU and US
G7 leaders have been meeting to discuss the implications of AI, particularly the threat to national security posed by misinformation. They have called for the development of technical standards to keep the technology trustworthy.
Companies are also working to improve the trustworthiness of AI. Google’s UK AI lab DeepMind has published an “early warning” framework that can flag whether an AI model has the potential to pose a serious risk to humanity, and a group of industry leaders, including OpenAI’s Sam Altman, has signed an open letter calling for urgent risk mitigation.
While the EU is pursuing a regulation-led approach to controlling AI, the Biden administration is split on how best to proceed. Some officials in the Commerce Department support legislation similar to the EU’s, but others in national security roles and at the State Department feel it would put the country at a competitive disadvantage.
Initially the US had looked to be following the EU in regulating the use of AI, particularly in high-risk areas such as law and healthcare. That risk-based approach was part of an early framework for AI systems, but the EU has since moved to tighten its rules around foundation-model AI.
Individual EU countries are already using existing legislation to tackle the rise of AI. Italy temporarily banned ChatGPT until OpenAI fully complied with GDPR, and Google has been slow to roll out new AI tools in Europe over similar compliance concerns.
Speaking to Bloomberg, US National Security Council spokesman Adam Hodge said the administration is working to “advance a cohesive and comprehensive approach to AI-related risks and opportunities.”