Japan is looking to regulate the development and use of artificial intelligence with a light-touch approach, in a bid to quickly capitalise on the technology's potential to solve some of the problems caused by its rapid population decline. It joins the likes of the US and the UK in favouring a hands-off stance on the development of automated systems, but businesses are likely to follow the stricter rules proposed by the EU to ensure they can access what is a lucrative market.

The Japanese government wants light-touch AI rules, but the contrasting EU approach is likely to win out. (Photo: Faula Photo Works/Shutterstock)

Countries around the world are trying to determine the best approach to regulating artificial intelligence, particularly the more general-purpose models that power tools like ChatGPT and image generators like Midjourney. The UK and US favour a lighter-touch approach, focusing on safety research, international cooperation and guardrails rather than legislation.

In contrast, the EU has built a comprehensive and far-reaching set of regulations in the EU AI Act, which includes requirements for foundation model developers to declare training data and minimise the generation of illegal or harmful content. According to some EU companies, this poses a risk to their businesses, as they would not be competing with non-EU rivals on a fair or equal footing.

Japan’s AI regulation plans

Japan wants to align its approach to regulation more closely with that of the US, according to Reuters, which cites sources familiar with the Japanese government's plans. The government is focused on boosting economic growth rather than regulation, and is also hoping to capitalise on the country's chip manufacturing prowess to train large language models. One approach being considered in Japan is to remove copyright restrictions from material used to train an AI model, in direct contrast to the EU, which requires the declaration of any copyrighted material used in training.

The University of Tokyo’s Professor Yutaka Matsuo, chair of the Japanese government’s AI strategy council, has described the EU rules as “a little too strict”, suggesting it is “almost impossible” to account for the copyright of material used in deep learning. He told Reuters: “With the EU, the issue is less about how to promote innovation and more about making already large companies take responsibility.”

Global competition to become the home of AI regulation is speeding up. The UK will host an AI safety summit later this year, where world leaders and industry will meet to debate the specifics of AI regulation and attempt to come to a global consensus. Meanwhile, the EU is pressing ahead with the implementation of its AI Act, and US President Joe Biden’s administration is exploring the extent to which AI should be regulated.

‘Wise and measured’ AI regulation needed

Paul Barrett, deputy director of the Center for Business and Human Rights at NYU Stern School of Business, told Tech Monitor the EU’s approach is “wise and measured” and will likely become the global standard. “Even if the US and UK fail to emulate the EU, I expect that major producers of AI apps and other products will conform to EU standards because they won’t want to lose out on the lucrative European market and it will prove inefficient to offer different versions of their products in different parts of the world,” Barrett says.

He believes innovation can still happen in a strictly regulated environment, and that the concept of “pro-innovation” proposed by the UK and under consideration in Japan “really means unregulated”. He cites the lack of regulation in the social media industry, which has led to “substantial negative side-effects”.

Barrett predicts the EU approach will prevail regardless of how much pushback it faces from companies or other countries. “Regulation is not necessarily an obstacle to innovation or profits,” he says. “Consider the car industry, which resisted environmental regulation for decades but eventually came up with innovative ways to reduce carbon emissions, and even eliminate them with the development of a vibrant and fast-growing electric vehicle segment.

“In the end, the EU will seem like the forward-thinking jurisdiction when it comes to oversight of AI.”

Monish Darda, CTO of contract AI company Icertis, agrees that regulation is needed. “AI regulation must be pragmatic,” he says. “It must support large and small companies and encourage experimentation. But at the same time, regulation must allow AI to be controlled in a way that protects basic principles of ethics and law, including privacy.

“The draft [EU] AI law has potential, and the world is watching with excitement, hope, and expectations that the law will do well. The world not only expects a regulatory milestone that adequately addresses all interests – it needs it.”

Read more: France wants to become Europe’s capital for AI