Facial recognition company Clearview AI has been fined £7.5m by the Information Commissioner’s Office (ICO) for collecting and storing images of UK citizens for use in its software. The ICO has also issued a banning order which will stop the company from obtaining information about UK residents in future, and ordered it to delete all the offending records from its database.
The fine follows a joint investigation by the ICO and the Australian Information Commissioner, which opened in 2020 and concluded last November. A provisional fine of £17m had been mooted, but this appears to have been reduced. The ICO has yet to publish full details of the penalty and enforcement notice.
“People expect that their personal information will be respected, regardless of where in the world their data is being used,” said information commissioner John Edwards. “That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity.”
What is Clearview AI and why has it been fined by the ICO?
US-based Clearview AI has built a database of more than 20 billion images by gathering photographs and other personal information that is publicly available online. It then allows users of its software to input a photograph to see if it matches anyone in the database. Its software has been available to private companies and law enforcement agencies, and its website lists a number of case studies in which it has aided investigations in the US at both a local and federal level.
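In broad terms, this kind of face search works by converting each photograph into a numerical "embedding" and then looking for the stored vectors that sit closest to the query. The snippet below is a minimal, hypothetical sketch of that idea using cosine similarity; it is not Clearview's actual implementation, and the function names, threshold and toy data are invented purely for illustration.

```python
# Hypothetical sketch of embedding-based face search, not Clearview's code.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_embedding: np.ndarray,
           database: dict[str, np.ndarray],
           threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return database entries whose embeddings exceed the similarity threshold."""
    matches = []
    for identity, stored_embedding in database.items():
        score = cosine_similarity(query_embedding, stored_embedding)
        if score >= threshold:
            matches.append((identity, score))
    # Best matches first.
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy example with made-up 4-dimensional embeddings
# (real face-recognition systems use hundreds of dimensions).
db = {
    "profile_A": np.array([0.1, 0.9, 0.3, 0.2]),
    "profile_B": np.array([0.8, 0.1, 0.4, 0.6]),
}
query = np.array([0.12, 0.88, 0.31, 0.19])
print(search(query, db))
```

In a real system the embeddings would come from a trained neural network and the search would use an approximate nearest-neighbour index rather than a linear scan, but the principle of comparing a query vector against a large stored database is the same.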
The ICO investigation found that Clearview had uploaded images of UK citizens without their consent. “Given the high number of UK internet and social media users, Clearview AI Inc’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge,” the ICO statement says. “Although Clearview AI Inc no longer offers its services to UK organisations, the company has customers in other countries, so the company is still using personal data of UK residents.”
Furthermore, the investigation concluded that Clearview's storing of the data was incompatible with GDPR, failing to meet the higher protection standards expected for biometric information such as facial images. People who inquired about their presence on the database were asked to hand over further personal details, a requirement which the ICO said "may have acted as a disincentive to individuals who wish to object to their data being collected and used".
Edwards added: “Clearview AI has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”
Tech Monitor has approached Clearview AI for comment on today’s news. Speaking in November when the provisional UK fine was published, Hoan Ton-That, chief executive of Clearview AI, said: “I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and intentions,” adding that the company had “acted in the best interests of the UK”.
Global regulators crack down on Clearview AI
Today’s fine is the latest in a string of moves against Clearview AI by global data protection regulators and courts. In December last year, France’s privacy watchdog ordered the company to delete its data on French citizens, and in March, Italy’s regulator fined the company €20m and issued a similar order to delete citizens’ data.
Earlier this month, Clearview agreed to stop selling its software to private companies in the US in the face of legal action being heard in a Chicago court. This means its system is now available only to US public sector organisations.
Speaking to Tech Monitor before today’s ICO fine, Estelle Masse, global data protection lead at digital rights campaign group Access Now, said the fine in Italy was “a really strong and important signal … but this signal should also be heard by others”.
Public opposition to facial recognition is gathering steam across Europe and beyond, Masse said, and companies “should be thinking twice” before pursuing such technologies, due to the clear potential for human rights violations and the fact that regulators are “trying to get ahead of the debate”.
Some argue that the business model of Clearview AI and companies like it is fundamentally incompatible with the EU’s data protection regime. “There is a big issue in terms of how this business model can concretely adhere to GDPR principles and obligations,” says Stefano Rosseti, a privacy lawyer at NOYB. “I don’t really understand how that could even be possible.”
But there are signs that European facial recognition companies are taking note and moving their operations elsewhere. PimEyes, a Polish company that offers an online face search engine, relocated to the Seychelles after a complaint was filed against it.