In a landmark ruling last month, the UK Supreme Court decided that Uber must recognise its drivers as workers – striking a blow to the gig economy model. Regulators may now turn their attention to the ‘algorithmic management’ practices of digital labour platforms, which have “very negative implications” for workers, according to the International Labour Organisation (ILO). 

Digital labour platforms produced revenue of more than $52bn in 2019 globally, but the way they manage workers is detrimental to their well-being, according to the ILO. (Photo by Tomohiro Ohsumi/Getty Images)

Digital labour platforms have boomed during the Covid-19 pandemic, as millions were laid off and confined to their homes. But the fundamental practices underlying these platforms are increasingly in question.

A sprawling new report from the ILO reveals that algorithmic management is integral to digital labour platforms, but that this practice can contribute to an erosion in workers’ rights and quality of life. The report calls for the practice to be fundamentally changed – putting another cornerstone of the gig economy model in jeopardy.

The ILO found that the past decade has witnessed a fivefold increase in the number of digital labour platforms, from 142 in 2010 to over 777 in 2020. This includes platforms for web-based work like copywriting, as well as location-based work such as taxi apps and on-demand odd-jobs or food delivery platforms. Digital labour platforms produced revenue of more than $52bn in 2019 globally, and recent surveys suggest that up to a fifth of the adult population has performed platform work. 

With their rise in prominence, digital labour platforms are disrupting the norms of the labour market and employee relations. While traditional recruitment is typically based on experience and qualifications, digital labour platform workers are generally assigned jobs by algorithms that take into account ratings, client or customer reviews, rates of cancellation or acceptance of work, and worker profiles.

This means that workers are less able to turn down work, as to do so would negatively impact their ratings and ability to secure more jobs, the ILO report finds. A global survey of gig workers by the ILO found that 37% of app-based taxi drivers and 48% of delivery drivers are unable to refuse or cancel work without repercussions. 
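To make the mechanism concrete, the ranking described above can be sketched in a few lines of Python. This is purely illustrative: the fields, weights and scoring formula below are hypothetical, since real platform algorithms are proprietary. It simply shows why declining work can translate into fewer future job offers.

```python
# Illustrative sketch of score-based job assignment on a digital labour platform.
# All fields and weights are hypothetical; actual platform algorithms are proprietary.
from dataclasses import dataclass

@dataclass
class Worker:
    rating: float            # average customer rating, 0-5
    acceptance_rate: float   # share of offered jobs accepted, 0-1
    cancellation_rate: float # share of accepted jobs cancelled, 0-1

def score(w: Worker) -> float:
    # Higher ratings and acceptance rates help; cancellations count against the worker.
    return 0.5 * (w.rating / 5) + 0.3 * w.acceptance_rate - 0.2 * w.cancellation_rate

workers = [
    Worker(rating=4.9, acceptance_rate=0.95, cancellation_rate=0.02),
    Worker(rating=4.9, acceptance_rate=0.60, cancellation_rate=0.02),  # declines work often
]
ranked = sorted(workers, key=score, reverse=True)
# The second worker ranks lower despite an identical rating, purely because
# they refuse more offered jobs - the dynamic the ILO survey describes.
```

Under a scheme like this, refusing work feeds directly back into a worker's ranking, so the "choice" to decline a job carries a hidden cost.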


Algorithmic management and employee surveillance in the gig economy

The algorithmic management of workers incorporates near-constant surveillance too: workers on location-based platforms (such as Uber) are monitored via GPS, while web-based workers are monitored by software that captures screenshots or logs their keystrokes. 

“When you think of algorithmic management and see the continuous tracking of workers’ behaviour, constant performance evaluations, automatic implementation of decisions without human intervention, workers interacting with the system, rather than humans, and low transparency – all these things can have very negative implications,” said Janine Berg, senior economist at the ILO, at an online event this week. 

Berg cited the example of Ann Taylor, a retail company that implemented automated scheduling software, meaning employee hours were dictated by an algorithm. Employees were no longer able to ask managers to change their hours, eroding the opportunity for flexibility.

Speaking to the Wall Street Journal, director of store operations Scott Knaul said that giving the system a nickname, Atlas, “was important because it gave a personality to the system, so [employees] hate the system and not us”. “It’s a way of using a technological system to impose discipline on workers, in a way that doesn’t allow [the workers to have a voice],” said Berg. 

Principal investigator of the Platform Labor research project Niels van Doorn has researched the Handy platform, a cleaning and odd-job gig economy platform. He found that the platform’s automated system would fine workers based on variables such as being late, cancelling the job, or leaving a few minutes early, while providing little means to push back. “This is very much a way of disciplining labour, a very historically old way of disciplining labour – but one that has been hidden behind the guise of technology,” said Berg. 

Regulating algorithmic management

To address some of these issues, the ILO’s independent Global Commission on the Future of Work recommends the development of an international governance system that would force platforms to sign up to a blanket set of rights. With regard to algorithmic management, the ILO calls for a “human-in-command” approach, to ensure that “final decisions affecting work are taken by human beings”.

The most recent report recommends going further and introducing specific regulation for AI, including forcing companies to share proprietary algorithms where appropriate. “A blanket prohibition on access has serious potential implications for the pursuit of legitimate public interest objectives such as combatting discrimination and protecting consumers and workers,” it reads. 

Some jurisdictions, including Australia, China, the EU, Japan, Singapore, and the US, have started to develop regulatory frameworks for AI, which could demand more accountability, transparency and protection to safeguard against the negative impacts of AI. The report suggests that governments consider pushing policies that favour open-source technologies and open algorithms for inspection by regulatory authorities.