European regulatory body the European Data Protection Board (EDPB) is launching a dedicated task force to coordinate oversight of ChatGPT. The unit will foster cooperation between European data protection agencies on possible enforcement actions. The move follows the Biden administration’s announcement that it is stepping up efforts to regulate AI systems such as ChatGPT, as well as Italy’s outright ban on the chatbot.

European Data Protection Board launches ChatGPT taskforce
The European Data Protection Board has launched a ChatGPT regulation task force. (Photo by Jarretera/Shutterstock)

The EDPB announced it is organising a dedicated task force to coordinate possible regulatory actions by national data protection authorities across Europe. The move could mark the start of cohesive EU-wide regulation of the chatbot, though this is likely to be a lengthy process.

ChatGPT, released in November by artificial intelligence research laboratory OpenAI, is an AI large language model (LLM) designed to respond to questions with quick, realistic and in-depth answers. ChatGPT is one of the fastest-growing consumer applications in history, reaching more than 100 million monthly active users in January. Governments and public figures have raised concerns over threats it could pose to privacy, safety and jobs.

According to a release from the EDPB, the regulatory body “decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”

One national watchdog source told Reuters that the task force is hoping to align ChatGPT policy across Europe, but that this could take time. The source mentioned that members feel it is important for policies to be transparent, rather than restrictive.

Global regulation attempts

The announcement comes in the same week that several other government responses to the chatbot came to light. Italy announced that, following its ban on the LLM earlier this month, it will require OpenAI to complete a “to-do list” of tasks to bring the chatbot into compliance with GDPR by April 30.

The actions of Italy’s data protection authority, Garante, have been seen by some as a “test case” that could be followed by other data regulators throughout the EU. Indeed, the German government released a statement shortly after Italy’s ban on the chatbot. “In principle, such action is also possible in Germany,” Federal Commissioner Ulrich Kelber told local papers.

The Biden administration also released a statement this week, outlining the steps it has taken towards drafting its own AI legislation. The National Telecommunications and Information Administration, part of the US Department of Commerce, issued a Request for Comment earlier this week in a bid to gather information on how AI systems such as ChatGPT could be regulated.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” a release from the administration explains. “For these systems to reach their full potential, companies and consumers need to be able to trust them.”

Read more: OpenAI given ‘to-do list’ by Italian data watchdog