The Department for Work and Pensions (DWP) has come under scrutiny after an exclusive report by The Guardian revealed that its artificial intelligence (AI) system for detecting welfare fraud exhibits bias. Despite assurances earlier this year that the system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers,” internal documents show significant gaps in its fairness assessments.

While the DWP maintains that the final decision on welfare payments rests with human caseworkers, concerns have been raised that the algorithm could unfairly target specific groups. The system is part of efforts to reduce an estimated £8bn in annual fraud and error. However, no fairness analysis has been conducted to assess potential bias relating to race, sex, sexual orientation, religion, pregnancy and maternity, or gender reassignment status.
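For context, a basic fairness analysis of this kind typically compares how often a model flags claims for review across groups defined by a protected characteristic. The sketch below is purely illustrative and does not reflect the DWP's actual system or methodology; the dataset, column names and groups are hypothetical.

```python
# Illustrative only: a minimal group-disparity check on a model's referral
# flags. The data and column names here are hypothetical, not the DWP's.
import pandas as pd

def flag_rate_by_group(df: pd.DataFrame, group_col: str, flag_col: str = "flagged") -> pd.Series:
    """Share of cases flagged for review, broken down by a protected characteristic."""
    return df.groupby(group_col)[flag_col].mean()

# Hypothetical claims data: an age band per claim and the model's flag.
claims = pd.DataFrame({
    "age_band": ["under_35", "under_35", "35_to_64", "35_to_64", "35_to_64", "65_plus", "65_plus"],
    "flagged":  [1,          0,          1,          0,          0,          1,         1],
})

rates = flag_rate_by_group(claims, "age_band")
print(rates)
# A flag-rate ratio between groups far from 1.0 would warrant investigation.
print("max/min disparity ratio:", rates.max() / rates.min())
```

A real audit would also test statistical significance and examine outcomes (not just referrals), but even this simple comparison is the kind of analysis campaigners say was never carried out for several protected characteristics.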

Campaigners have criticised the DWP’s approach, accusing the government of prioritising implementation over thorough risk assessment. Caroline Selman, senior research fellow at the Public Law Project, said the findings suggest the DWP “did not assess whether their automated processes risked unfairly targeting marginalised groups.” Selman urged the department to “stop rolling out tools when it is not able to properly understand the risk of harm they represent.”

The DWP’s acknowledgment of disparities in how its AI system assesses fraud risks has intensified scrutiny of government use of automated decision-making tools. Advocacy groups and experts are calling for greater transparency, with the Public Law Project labelling the approach as “hurt first, fix later.”

Records indicate that public authorities in the UK employ at least 55 automated tools, potentially affecting millions of people. However, the government’s official register lists only nine. The Guardian recently reported that no Whitehall department has registered its AI systems despite a mandate for transparency earlier this year.

A separate disclosure also revealed that last month a police procurement body under the Home Office sought bids for a £20m facial recognition software contract, reigniting debate about mass biometric surveillance. Peter Kyle, the Secretary of State for Science, Innovation and Technology, previously told the publication that the public sector “hasn’t taken seriously enough the need to be transparent in the way that the government uses algorithms.”

The DWP has been reluctant to disclose specific details of the fairness analysis, redacting findings on how age, disability, or nationality impact the system’s fraud detection. Officials claim such disclosures could enable fraudsters to manipulate the system.

“Our AI tool does not replace human judgment, and a caseworker will always look at all available information to make a decision,” a DWP spokesperson told the newspaper. “We are taking bold and decisive action to tackle benefit fraud – our fraud and error bill will enable more efficient and effective investigations to identify criminals exploiting the benefits system faster.”

Calls for more transparency and a reevaluation of AI’s role in public sector decision-making continue to grow, as concerns over potential biases and misuse of technology dominate the debate.

Automation advances face ethical challenges

The DWP has highlighted its efforts to reduce fraud and error through modern digital services and increased automation, according to its annual report and accounts for 2023-2024. The department stated that it continues to explore automation opportunities, including correspondence management and administrative tasks, to free up staff for direct customer support.

The department also reported the rapid testing of generative AI prototypes, including projects such as Aigent and a-cubed, aimed at supporting work coaches, updating legacy systems, and improving policy productivity. The department emphasised its commitment to the safe, ethical, and value-driven adoption of AI in its operations.

Despite these advancements, concerns persist over the department’s AI initiatives. The National Audit Office (NAO) has raised questions about potential bias in the AI models used to detect fraud, particularly against individuals with protected characteristics. The NAO report highlighted the DWP’s £70m investment in advanced analytics between 2022-23 and 2024-25, aimed at tackling fraud and error. The DWP projects that these efforts will deliver savings of £1.6bn by 2030-31.
