Artificial intelligence (AI) technologies could soon shape and manipulate human decision-making in a new digital marketplace termed the “intention economy,” researchers from the University of Cambridge have warned. This development, detailed in a study published in the Harvard Data Science Review, raises concerns about the ethical and societal implications of commodifying human intentions.

The intention economy represents a shift from the current attention economy, where social media platforms profit from capturing user attention through targeted advertisements. According to researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI), the new model would see AI tools forecasting and influencing user intentions, ranging from consumer purchases to political choices, and selling that information to the highest bidder. Platforms like Facebook and Instagram have built their business models on keeping users engaged and monetising their attention. However, the intention economy could mark a significant evolution, treating human motivations as a new, highly lucrative commodity. “For decades, attention has been the currency of the internet,” said Jonnie Penn, a technology historian at LCFI.

The study outlines how large language models (LLMs), the AI technology behind tools like ChatGPT, are poised to transform this landscape. By leveraging users’ behavioural and psychological data, these systems could anticipate, steer, and even commodify human intentions. For example, an AI assistant might suggest booking a movie ticket after detecting that a user is feeling stressed.
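To make that mechanism concrete, the sketch below shows in simplified Python how an assistant might map behavioural signals to an inferred intent and then to a commercial suggestion. The signal names, thresholds, and suggestion catalogue are hypothetical illustrations for this article, not anything specified in the study.

# Hypothetical sketch: inferring an intent from behavioural signals and
# turning it into a monetisable suggestion. Signal names, thresholds, and
# offers are illustrative assumptions, not the study's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviouralSignals:
    stress_score: float        # e.g. inferred from word choice in recent chats
    late_night_activity: bool  # e.g. inferred from usage timestamps

def infer_intent(signals: BehaviouralSignals) -> str:
    """Map raw signals to a coarse, guessed-at user intent."""
    if signals.stress_score > 0.7:
        return "seek_relaxation"
    if signals.late_night_activity:
        return "plan_tomorrow"
    return "none"

# A catalogue pairing inferred intents with suggestions an assistant could surface.
SUGGESTIONS = {
    "seek_relaxation": "Fancy seeing that new film tonight? I can book tickets.",
    "plan_tomorrow": "Shall I reserve a table for lunch tomorrow?",
}

def suggest(signals: BehaviouralSignals) -> Optional[str]:
    """Return a suggestion only when a monetisable intent has been inferred."""
    return SUGGESTIONS.get(infer_intent(signals))

if __name__ == "__main__":
    print(suggest(BehaviouralSignals(stress_score=0.9, late_night_activity=False)))

The point of the sketch is how little is needed: once an inferred intent exists as a data record, it can be matched, logged, and sold on like any other signal.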

AI manipulation and personalisation

The research warns of potential misuse, with AI models dynamically generating personalised suggestions based on factors like a user’s political leanings, preferences, or psychological profile. Penn cautioned that without regulation, this technology could lead to widespread manipulation, affecting democratic processes, media freedom, and market fairness.

The study also highlights the risk of AI tools being used to manipulate conversations in the service of advertisers and other third parties. For instance, AI could subtly nudge users towards specific decisions or platforms by exploiting personal data gleaned from everyday interactions.

In one scenario outlined in the paper, companies might bid in real time to influence a user’s intent to book a restaurant, flight, or hotel. This would extend existing practices in the advertising industry, which already uses data to predict and influence consumer behaviour, but with far greater precision and personalisation. The paper cites examples of AI models already capable of advanced intent prediction. It references Meta’s AI model, Cicero, which demonstrated human-like abilities in the board game Diplomacy, a game heavily reliant on understanding and predicting opponents’ intentions. Such technologies, the researchers suggest, could be repurposed to steer user behaviour in commercial and political contexts.
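As a rough illustration of how such real-time bidding on intent might work, consider the toy Python sketch below. The bidder names, prices, and auction rule (a simple highest-bid-wins selection) are assumptions made here for illustration; the paper describes the scenario only at a conceptual level.

# Hypothetical sketch of real-time bidding on a single inferred intent.
# Bidders, prices, and the highest-bid-wins rule are illustrative assumptions.
from typing import Dict, Optional, Tuple

def run_intent_auction(intent: str,
                       bids: Dict[str, float]) -> Optional[Tuple[str, float]]:
    """Return the winning bidder and price for one inferred intent."""
    if not bids:
        return None
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

if __name__ == "__main__":
    # The assistant has inferred that the user intends to book a restaurant;
    # advertisers bid for the right to shape the suggestion it makes.
    inferred_intent = "book_restaurant"
    bids = {"ChainBistro": 0.42, "LocalTrattoria": 0.31, "DeliveryApp": 0.55}
    result = run_intent_auction(inferred_intent, bids)
    if result:
        winner, price = result
        print(f"{winner} wins the '{inferred_intent}' slot at {price:.2f}")

In this framing, the winning bidder does not buy an ad slot on a page, but influence over what the assistant recommends next, which is what distinguishes the intention economy from today's attention-based advertising.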

The authors also quote Nvidia CEO Jensen Huang, who noted that AI models are increasingly adept at understanding user intentions and presenting tailored information. This capability, while potentially useful, raises questions about the ethical boundaries of AI-driven persuasion. Penn and his co-author, Yaqub Chaudhary, emphasised the need for regulation to prevent the exploitation of human intentions. They called for early intervention to safeguard fundamental societal values, including free elections and fair market competition.
