Amazon’s leadership team, including founder and former CEO Jeff Bezos, is being sued by a shareholder for “allowing the mishandling of users’ biometric data,” such as fingerprints and facial images collected by its devices and services. Legislation governing the use of biometrics is patchy worldwide, and the outcome of the case, along with other court actions being brought against Amazon, could go some way to establishing what counts as an acceptable way to store and deploy this sensitive information.

Amazon is being sued in its home state of Washington over its use of biometrics. Pictured is its Seattle headquarters. (Photo by David Ryder/Getty Images)

The company is accused of profiting from individuals’ biometric data, in violation of privacy laws in Washington State, where Amazon has its global headquarters. Unusually, the legal action targets the senior leadership team – including Bezos and current CEO Andy Jassy – holding them responsible for allowing Amazon to violate the law and for failing to speak out against it.

In total, Amazon is facing at least 14 class-action lawsuits and 75,000 individual cases relating to the use and collection of biometric data, according to court papers filed in Washington by Stephen Nelson, the Amazon shareholder suing the company’s leadership team. The outcome of these cases is likely to shape how far data collection goes across the wider industry, with Amazon seen as a bellwether.

Nelson says the Amazon executives are liable for the consequences of those other lawsuits, as well as for any fines and legal fees that might result from a successful action. Court papers say the damages could be “astronomical to the point the company could be put out of business if the violations are not immediately addressed, stopped and remedied.”

A number of countries have laws protecting users’ biometric data. In Europe and the UK it is covered by GDPR, but these regulations often protect only the identity of the individual, says Imogen Parker, associate director of policy at AI research organisation the Ada Lovelace Institute, and not the potentially more intrusive intangible aspects, such as predictions about a person’s likes and personality type.

Biometrics “used to be a narrow thing,” Parker says. “It was the purview of police forces and government, but new technology has opened up opportunities for collecting and using biometric data in the private sector and across a wider number of industries.”

In the US there is no national legislation governing how this information can be used, but 20 states have safeguards against the sale of biometric data or its collection without informed consent. Among them is Washington State, where this latest case is set to be heard.

Biometric data use: a necessary advance or pseudo-science?

Amazon is not the only tech giant to have found itself in hot water when it comes to this kind of data. Microsoft, Google and Facebook’s parent company, Meta, have faced legal action accusing them of the misuse of biometric data gathered to support AI and identification services.

This week Clearview AI, the world’s best-known facial recognition software company, agreed to stop selling its products to private companies in the US in the face of multiple lawsuits brought by privacy campaigners.

Parker told Tech Monitor that some use cases for biometrics border on pseudo-science, with little scientific backing behind them. These include predictions about someone’s sexuality, personality type and even whether they are likely to be good at a job.

“We need to place more legal attention on use of biometrics as a lot of uses fall between the existing legal gaps, including the use of facial recognition in supermarkets or AI in job interviews,” Parker argues.

She said the lawsuits in the US and elsewhere had put a new focus on the ethics and legality of data collection, as well as the issue of consent for the use of these images.

In some cases, the data is gathered by the likes of Amazon through its Alexa smart speaker devices, which capture voice recordings to improve voice recognition, or images of faces that allow a device to recognise who has walked into a room and change the home screen to reflect their needs.

But the issue is how that data is stored, not just how it is collected, Parker says. “There is a problem around the societal impact of normalising facial recognition,” she says. “There are studies that have shown issues around discrimination, and it is being used in areas where people are already over-policed.”

Is new legislation required around biometrics use?

The Ada Lovelace Institute is due to publish a report on the use of biometric data in the coming months. As part of the work on that report, Parker said, people described biometrics as the most intimate form of data, and as a permanent data point, because you “can’t easily change your face or fingerprint”.

Clearer legislation is required in this area, Parker says, and tacking biometrics on to upcoming laws, such as the UK’s replacement for GDPR, the Data Reform Bill, is unlikely to go far enough.

“There are also concerns about the increase in use, and there is a need for a wider discussion on legislation that restricts that so biometrics are only put in place when deemed appropriate and safe,” says Parker, adding that “now is the time for the government to introduce new comprehensive legislation to tackle the problem and ensure that it develops with appropriate checks already in place.”

Read more: The controversial rise of biometrics among the displaced