Landmark European Union legislation that could rein in the power of Big Tech companies moved a step closer to reality today. The principles of the Digital Markets Act (DMA) have been approved by the Council of the EU, paving the way for details of the bill to be finalised in 2022. For post-Brexit UK, similar legislation remains at an early stage, and greater co-ordination between regulators is needed to ensure a level playing field for digital businesses and a consistent approach to emerging technologies such as artificial intelligence, a House of Lords committee heard this week.
As reported by Tech Monitor, Europe’s DMA will apply new regulations to companies deemed digital “gatekeepers” – Big Tech companies such as Google, Amazon and Facebook which control what the EU describes as “core platform services”. This could refer to a search engine, social network or digital marketplace which consumers and businesses have to use to access or sell digital services. The legislation would put a host of new obligations on the gatekeepers around interoperability and use of customer data, as well as ensuring they don’t prioritise their own products over those of third parties.
The Council of the EU has proposed minor changes to the initial text of the bill put forward by the European Commission, and further discussions will take place next year. “This shows that the EU is strongly committed to ensure fair competition online,” said Zdravko Počivalšek, Slovenian minister for economic development and technology (Slovenia currently holds the rotating presidency of the Council of the EU). “The proposed DMA shows our willingness and ambition to regulate Big Tech and will hopefully set a trend worldwide.”
UK tech regulation: co-ordinated action needed
The UK has shown signs it wants to follow this trend, opening a consultation in August on its own digital markets legislation. Its approach to digital gatekeepers, which it refers to as companies with “strategic market status” (SMS), differs slightly: it focuses on the behaviour of SMS businesses in certain markets, rather than on their activities as a whole.
The House of Lords digital regulation committee is currently investigating how such legislation can be aligned with other bills to create a joined-up approach to tech regulation. At an evidence session on Tuesday, it heard from Chris Philp MP, minister for tech and the digital economy, who described the government’s approach to tech regulation as “pro-competition”, explaining: “We want to support the UK tech economy to develop in a competitive way, and don’t want to stifle it with excessive regulation that might stymie innovation or make the UK a less attractive place to do business compared to other places around the world.”
However, the committee was also told this light-touch approach has its perils. Tabitha Goldstaub, co-founder of CogX and chair of the UK AI Council, said that new research set to be released by the Alan Turing Institute will show “there really isn’t a common capacity across regulators” when it comes to dealing with technologies such as AI. She added regulators often lack the “cognitive skills, practical abilities and technical expertise” to carry out investigations which can judge whether technologies are being used appropriately.
With a host of different bodies involved in tech regulation, the government last year set up the Digital Regulation Cooperation Forum (DRCF), which aims to ensure “a greater level of co-operation, given the unique challenges posed by regulation of online platforms.” It comprises the Competition and Markets Authority, the Information Commissioner’s Office, communications regulator Ofcom and the Financial Conduct Authority. However, as the forum is not a statutory body, its powers to make changes are limited.
Philp defended the role of the DRCF. “There are clearly areas of overlap and ambiguity which can be resolved through the DRCF,” he said. “It strikes us as the right step for where we are today.”
Lizzie Greenhalgh, deputy director for digital regulation in the Department for Digital, Culture, Media and Sport, added: “The voluntary nature of [the DRCF] gives us agility and flexibility when we’re still at quite an early stage of how we govern digital technologies. But we did highlight in recent consultations around data and competition that we are looking at whether there would be merit in statutory systems to support co-ordination between individual regulators.”
OpenAI calls for scrutiny of AI APIs
Elsewhere, the committee was told rules around the ethical use of AI must be developed to ensure the technology doesn’t follow the same path as the largely unregulated social media space.
“We predict [AI] will have a comparable impact to social media platforms in the coming decades, with very little attention being paid to how these systems are being used,” said Mira Murati, senior vice president of research, product and partnerships at OpenAI, the company which developed the powerful GPT-3 natural language processing AI model. “This is the time to understand the risks and opportunities before they become widely available.”
The principal risk comes from AI systems, such as GPT-3, which can be accessed via APIs, meaning that their developers don’t necessarily control how the end user deploys them. “At OpenAI we have policies that limit how people use the API and to limit any harmful behaviour by end-users and customers, but we are worried similar tools will proliferate via APIs which are less rigorously managed,” Murati said. “UK agencies could study the issues around these APIs or fund work by researchers.”
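The gatekeeping Murati describes happens at the API layer: because all requests pass through the provider, it can screen them against a usage policy before the model ever responds. The following is a minimal, purely illustrative sketch of such a pre-generation policy check; the function names and blocked categories are hypothetical and do not reflect OpenAI’s actual implementation.

```python
# Hypothetical sketch of an API-side usage-policy check.
# All names and categories here are illustrative assumptions,
# not any provider's real system.

BLOCKED_CATEGORIES = {"harassment", "malware", "disinformation"}


def classify(prompt: str) -> set:
    """Toy classifier: flags a prompt if it mentions a blocked category.

    A real provider would use a trained moderation model here,
    not keyword matching.
    """
    lowered = prompt.lower()
    return {cat for cat in BLOCKED_CATEGORIES if cat in lowered}


def serve_completion(prompt: str) -> str:
    """Run the policy check before generating; refuse flagged requests."""
    flagged = classify(prompt)
    if flagged:
        return "Request refused: violates policy ({})".format(
            ", ".join(sorted(flagged))
        )
    # Placeholder for actual model generation.
    return "Model output for: {!r}".format(prompt)
```

The point of the sketch is structural: a hosted API can interpose a check like this on every call, whereas a model distributed as raw weights cannot, which is why less rigorously managed APIs (or no API at all) remove this control point.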