
How to prepare for a new era of AI regulation

Businesses that use AI should be getting ready for the EU’s proposed regulations now, writes Alex van der Wolk, co-chair of law firm Morrison & Foerster's Global Privacy & Data Security Group.

In April this year, the European Union (EU) became the first political body to officially set wheels in motion for the regulation of artificial intelligence (AI) with the publication of its Proposal for a Regulation laying down harmonised rules on artificial intelligence. As governments pay increasing attention to the challenges and opportunities posed by AI and the impact it has on citizens’ lives, talk about the need for regulation has also been on the rise. Other countries, including the United States and Australia, are already working on introducing their own regulation around this technology, and they will be looking at the EU for guidance in this uncharted territory.

Many businesses are implementing or exploring AI, and they are right to wonder what this regulation will mean for them. However, as disruptive and innovative as the proposal is, it will not affect all aspects of AI systems equally. Understanding how the EU AI regulation will apply in practice is essential for companies that want to be ready when it comes into force.

European executive vice-president Margrethe Vestager unveiled the EC’s proposed rules on AI in April. (Photo by Olivier HOSLET/POOL/AFP)

EU AI rules: what hasn’t changed

Businesses that handle large amounts of data may wonder whether the proposal will impact their data privacy and information processing practices. The short answer is no. The new regulation will not change how businesses that use AI systems should treat their data. This aspect of the technology already falls under the EU General Data Protection Regulation (GDPR). GDPR applies to the processing of personal data in the context of an EU establishment, or when offering goods or services to, or monitoring the behaviour of, individuals in the EU. Because GDPR applies regardless of the means – manual or automated – by which personal data is processed, businesses that employ AI technology need not worry that the new regulation will force them to make sweeping changes to the way they process data.

Instead, the EU AI proposal is designed to work in harmony with GDPR, acting as an extension of the base already established by GDPR. For example, GDPR already imposes specific requirements on profiling and automated decision-making, two common applications of AI systems often used to filter applications in recruiting processes.

Businesses using AI for this purpose – even if they acquired the system from a third party – must already adhere to legal requirements of fairness, transparency, and the right to human intervention. Under GDPR, companies acquiring AI systems from a vendor are obliged to impose contractual obligations guaranteeing that the vendor’s AI technology is trained, if not to prevent discrimination, then at least not to knowingly discriminate.

What is new in the EU AI rules?

The proposal introduces several new obligations that will have a significant impact on the market. The most notable are the prohibition of AI systems that it classifies as posing an unacceptable risk – such as certain uses of facial recognition technology – and the introduction of specific requirements for “high-risk” AI systems and their users.

Additionally, the regulation’s extraterritorial scope means that the new requirements will affect not only the EU market, but also all companies that seek to do business in the EU or with EU-based businesses. Given the sheer size of the EU market, many businesses will need to review which elements of their AI use are now classed as “high-risk” and plan accordingly.

Failure to follow the regulation carries a significant cost to a business. In a similar vein to GDPR, non-compliance with the regulation could cost a firm up to €30m or 6% of its total worldwide annual turnover, whichever is higher.

How should businesses prepare for the EU’s AI rules?

Many organisations will be anxious to understand how this regulation will impact them. While the publication of the EU AI regulation proposal was without a doubt a ground-breaking moment, it will take a few years for the regulation to come into force. The proposal still has to be approved by the European Parliament and the Council of the European Union, both of which will debate and propose amendments. Once approved, these two legislative bodies will work with the European Commission towards finalising the regulation. From the moment a final document is agreed, a two-year implementation period will start, in addition to a 12-month transition period for AI systems that are already operating in the EU market.

Despite this long lead time, businesses should not ignore the changes that are on the horizon. Some of the adjustments businesses will need to make may take months to implement, so organisations that use AI must start considering how their business may need to adapt to the changing regulatory landscape. When GDPR rolled out, many businesses were underprepared and scrambled to make themselves compliant mere weeks before the regulation came into effect.

While the very earliest the EU AI regulation will come into force is 2023, organisations that use this technology should monitor the development of these controls closely. Adapting the use of AI within a business may require a thorough review of internal processes, so it is vital that companies remain well-informed. Adjusting to the changing controls over the next two years can be a smooth process, but it will require preparation and close attention to the rules as they take shape.

Alex van der Wolk is co-chair of law firm Morrison & Foerster's Global Privacy & Data Security Group.