February 3, 2021 (updated 29 July 2022, 9:52am)

Algorithms: The age of self-regulation could be ending

Companies have been left to regulate their own algorithms up until now, but regulators could soon be cracking down.

By Laurie Clarke

Last summer, a UK-wide movement of high school pupils united behind the rallying cry of “F*ck the algorithm!” The A-level students were outraged by the unfairness of a computational model that had disproportionately inflated the grades of private school pupils and downgraded those from disadvantaged backgrounds.

In January, the FTC took the unprecedented step of instructing a company to delete an algorithm. (Photo by Dan Kitwood/Getty Images)

Although at the time some were at pains to emphasise that it was not, technically, an algorithm, the debacle shone a spotlight on the black-box mechanisms that effect life-changing outcomes with minimal accountability. Scrutiny of algorithms has ratcheted up in recent years, but regulation has struggled to keep pace. However, the age of woolly ‘frameworks’, ethical commitments and self-regulation could soon be coming to an end.

This year, the US Federal Trade Commission (FTC) took the unprecedented step of instructing an AI company to delete a proprietary algorithm. The photo storage company Ever had covertly used billions of its customers’ photos to build facial recognition algorithms that it sold to law enforcement and the US military. The FTC ruled that the company had to delete not only the data it had misused, but the fruits of that misuse too.

“Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” commissioner Rohit Chopra wrote in an accompanying statement. “This is an important course correction.” Experts were quick to imagine how the same ruling might one day be applied to the algorithms of tech giants such as Google and Facebook.

The case is notable because the US has typically taken a laissez-faire approach to algorithmic regulation – largely leaving it up to companies to regulate themselves. Industry groups such as the Partnership on AI (including Microsoft, Amazon, Facebook and Apple) have been left to devise ‘best practices’ on ethics. Companies such as Google and Microsoft have developed their own principles and ethics advisory bodies with the ostensible aim of developing ethical AI. 

The subjectivity of ethics

But concerns about the limits of self-regulation are growing. A 2018 report from the AI Now Institute suggested that internal governance structures are failing to guarantee accountability for AI systems. And digital rights non-profit Access Now found that although the proportion of organisations that have an AI ethics charter leapt from 5% in 2019 to 45% in 2020, this tends to represent little more than “a branding exercise”, primarily due to the subjectivity of ‘ethics’.

Christo Wilson, associate professor in the Khoury College of Computer Sciences at Northeastern University, believes the FTC decision is an “important milestone”. “It represents a new tool in the FTC’s toolbox, and will hopefully have a deterrent effect on other companies planning to use similar datasets,” he says. “I hope we see more actions like this; for too long data collection practices have been unregulated, putting people at a significant disadvantage versus powerful tech companies.”


But regulating algorithms is difficult. Ariel Ezrachi, Slaughter and May Professor of competition law at Oxford University, researches the impact of AI on markets. Assessing the impact of algorithmic decision-making in this context is extremely complex, he says. “The problem that you have with algorithms, and with these types of environments, is that some of the elements are visible… but a lot of the elements can be more subtle.” 

Ezrachi says that German and UK competition watchdogs are ahead of the curve on algorithmic regulation. Germany has already implemented specific rules for algorithm transparency for search engines and social networks, while the UK’s Competition and Markets Authority (CMA) recently published a report on how algorithms harm consumers and competition, proposing several regulatory interventions. The CMA is launching a dedicated Digital Markets Unit this year that will focus exclusively on the behaviour of online companies.  

But algorithmic regulation is so nascent that there aren’t universally agreed approaches yet. “For a simple algorithm, you may be able to identify what it’s doing just from the code… but more advanced algorithms require much more advanced analytics,” says Ezrachi. “There is also the issue of, how do you get hold of these algorithms? How do you know whether you have the totality of the analytics that actually govern the markets?” 

Measuring outputs

The Ada Lovelace Institute and DataKind UK have suggested carrying out a ‘bias audit’, which involves conducting a systematic analysis of inputs and outputs of a particular AI system. This can be effective in scenarios where regulators are unable to get their hands on the algorithms themselves. The European Commission’s investigation of Google’s comparison-shopping service analysed both input and output data to discover that the search giant was self-preferencing its rankings. 
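As a minimal sketch of what the input/output analysis in such a bias audit can look like (the log data, group labels and threshold below are all invented for illustration), one common check compares a system’s selection rates across groups and flags large disparities:

```python
from collections import defaultdict

# Hypothetical audit log of (protected_group, system_decision) pairs,
# e.g. whether an automated screening system approved each applicant.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

# Selection rate per group: the share of positive decisions.
totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision
rates = {g: positives[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest rate divided by highest rate.
# The informal "four-fifths rule" treats ratios below 0.8 as a red flag.
ratio = min(rates.values()) / max(rates.values())
print(f"Ratio: {ratio:.2f}", "-> flag for review" if ratio < 0.8 else "-> ok")
```

A real audit would, of course, control for legitimate explanatory factors before attributing a disparity like this to the algorithm itself.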

However, there are some cases where either the inputs or the outputs of an automated system are at least partially unobservable. Ezrachi says this difficulty is highlighted by ‘algorithmic collusion’, where two or more companies, either intentionally or passively, collaborate to reduce the potential for competitive pricing. In intentional cases, companies arrange price cartels and use algorithms to achieve this – what is known as ‘partial automation’. In these cases, because multiple algorithms are interacting with each other, it can be possible to see the aggregate harm, but it will “be much more challenging to understand the role of each individual algorithm in manifesting such a harm”, according to the CMA report.
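A toy simulation (entirely hypothetical, not taken from the CMA report) shows why attribution is hard even in the passive case: each firm below follows a pricing rule that looks defensible in isolation, yet the interaction between the two rules ratchets prices upward with no agreement anywhere in sight:

```python
# Two innocuous-looking pricing rules whose interaction drifts prices up.
def firm_a_rule(rival_price: float) -> float:
    # Firm A simply matches its rival's last observed price.
    return rival_price

def firm_b_rule(rival_price: float) -> float:
    # Firm B positions itself 2% above the rival as a "premium" offering.
    return rival_price * 1.02

price_a, price_b = 10.0, 10.0
for step in range(10):
    price_a, price_b = firm_a_rule(price_b), firm_b_rule(price_a)
    print(f"step {step}: A = {price_a:.2f}, B = {price_b:.2f}")

# Prices climb steadily, yet neither rule alone "caused" the harm -
# exactly the attribution problem the CMA report describes.
```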

When algorithmic systems have clear outputs, digital “mystery shoppers” can be used to interrogate them. The European Commission and the CMA have both experimented with this approach to identify consumer effects such as online market segmentation through personalised pricing. These studies make use of the personal data of real shoppers (such as online profiles, click and purchase history, device and software usage) to identify potential harms.
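A stripped-down mystery-shopper probe might look like the sketch below. Everything here is a stand-in: `quote_price` simulates a retailer’s pricing endpoint, whereas a real study would drive scripted browser sessions carrying each synthetic profile’s cookies, device fingerprint and history:

```python
import random

def quote_price(profile: dict) -> float:
    # Hypothetical retailer logic: in reality this is the black box
    # the regulator is probing, observed only through its quotes.
    base = 100.0
    surcharge = 15.0 if profile["device"] == "ios" else 0.0
    return base + surcharge + random.uniform(-1.0, 1.0)

profiles = [
    {"device": "ios", "region": "london", "history": "frequent buyer"},
    {"device": "android", "region": "london", "history": "frequent buyer"},
    {"device": "android", "region": "leeds", "history": "first visit"},
]

quotes = [(p, quote_price(p)) for p in profiles]
for profile, price in quotes:
    print(profile, f"-> £{price:.2f}")

# A wide spread across otherwise-similar shoppers suggests
# personalised pricing worth investigating further.
spread = max(q for _, q in quotes) - min(q for _, q in quotes)
print(f"Spread: £{spread:.2f}", "-> investigate" if spread > 5.0 else "-> ok")
```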

Another option is a randomised controlled trial (RCT), which can be used to carry out an end-to-end audit – the method the CMA says may be the most effective for assessing algorithmic harm. The CMA notes that “web-facing companies are frequently running large numbers of RCTs internally, and therefore may be able to easily support such an RCT for audit, depending on its exact nature”.
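In outline, analysing such an audit RCT is standard experimental statistics. The sketch below simulates a trial in which users are randomly assigned either the algorithm under audit or a neutral baseline, then uses a permutation test to ask whether the observed gap in prices paid could plausibly be chance (all figures are invented):

```python
import random
from statistics import fmean

random.seed(42)

# Simulated outcomes: price paid by each user under the audited
# algorithm ("treatment") versus a neutral baseline ("control").
treatment = [random.gauss(105.0, 10.0) for _ in range(500)]
control = [random.gauss(100.0, 10.0) for _ in range(500)]
observed = fmean(treatment) - fmean(control)

# Permutation test: how often does randomly relabelling users
# produce a gap at least as large as the observed one?
pooled, n, extreme = treatment + control, len(treatment), 0
for _ in range(2000):
    random.shuffle(pooled)
    if abs(fmean(pooled[:n]) - fmean(pooled[n:])) >= abs(observed):
        extreme += 1

print(f"Observed gap: £{observed:.2f}, p ~ {extreme / 2000:.4f}")
```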

In some cases, complex machine learning algorithms may not even be fully understood by the developers – hence the term ‘black box’. But the CMA suggests that this should not be accepted. Its report states that “in preparation for potential regulatory intervention, we suggest it is incumbent upon companies to keep records explaining their algorithmic systems, including ensuring that more complex algorithms are explainable”. The report also says that firms should be held liable for the harmful effects of their algorithms, whether intentional or not. 
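One model-agnostic way a firm could meet that explainability expectation is permutation importance: shuffle one input at a time and record how much the model’s performance degrades. The model below is a trivial invented stand-in; the record-keeping technique, not the model, is the point:

```python
import random

random.seed(0)

def predict(income: float, postcode_risk: float) -> int:
    # Hypothetical black box: the auditor can call it but not inspect it.
    return 1 if income - 20.0 * postcode_risk > 30.0 else 0

# Synthetic evaluation set; labels here come from the model itself,
# so baseline accuracy is 100% and any drop is due to shuffling.
rows = [[random.uniform(0, 100), random.uniform(0, 3)] for _ in range(1000)]
labels = [predict(*r) for r in rows]

def accuracy(shuffle_col=None):
    data = [r[:] for r in rows]
    if shuffle_col is not None:
        column = [r[shuffle_col] for r in data]
        random.shuffle(column)
        for r, v in zip(data, column):
            r[shuffle_col] = v
    return sum(predict(*r) == y for r, y in zip(data, labels)) / len(data)

baseline = accuracy()
for col, name in enumerate(["income", "postcode_risk"]):
    drop = baseline - accuracy(shuffle_col=col)
    print(f"Shuffling {name}: accuracy falls by {drop:.1%}")
```

A record of such importance scores, kept alongside each model version, is one concrete form the documentation the CMA suggests could take.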

Companies – especially the Big Tech cohort – will do everything they can to resist the rising tide of algorithmic regulation, Wilson predicts. “Companies will scream about negative impacts to ‘innovation’ and apply their powerful lobby to stifle attempts at regulation,” he says, stressing that sustained pressure will be necessary to counteract these efforts.

Wilson is also concerned about the implementation of ‘bad’ regulations, should regulators fail to listen to experts and stakeholders: “Ones that are cursory, don’t include mandatory transparency and auditing requirements, entrench tech monopoly power instead of diffusing it.”

But another crucial element is enforcement. “All the regulation in the world won’t help us if there aren’t mechanisms for enforcement,” says Wilson. “People working in the tech accountability space have a role to play here, pushing regulators to include accountability regimes and mechanisms for enforcement in whatever regulation comes to pass.”
