The Department for Work and Pensions (DWP) is ploughing £70m into its digital transformation projects over the next three years. However, the National Audit Office (NAO) has raised concerns about investments the department is making in artificial intelligence (AI) models to detect fraud, suggesting these could exhibit bias against people with protected characteristics.
According to a new report from the NAO, which oversees public spending, DWP is investing £70m between financial years 2022-23 and 2024-25 in advanced analytics to tackle fraud and error. DWP expects this will help it to generate savings of around £1.6bn by 2030-31.
The government department is planning to use AI to identify patterns in welfare claims that could suggest fraud and error. The claims would then be reviewed by the relevant DWP team: the Enhanced Review Team before the claim enters payment, or Targeted Case Review agents if it is already in payment.
DWP wants AI to help bring in an extra £200m a year
In a separate report, DWP outlined an ‘ambitious’ new target to save £1.3bn in 2023-24 through tackling claimant fraud and error. The department will need to deliver a further £200m in savings on top of its previous £1.1bn target.
MP Tom Pursglove, minister of state for work and pensions, has been tasked with overseeing the fraud reduction effort, and said: “Our teams are working flat out to prevent new fraudulent claims and expose people who have been exploiting the system – with strong results.” But the minister explained that there was a need to go “even further” because of the changing fraud landscape.
“Working towards our ambitious new target over the next year will protect taxpayers’ hard-earned cash and enable us to deliver on the prime minister’s priorities to reduce debt and grow the economy,” he said.
Pursglove has been vocal about tackling benefit fraud and error; the total rate of overpaid Universal Credit payments currently sits at 12.8% (£5.54bn). The DWP report also notes that Universal Credit underpayments rose to 1.6% (£680m) in 2022-23, from 1.0% (£410m) in 2021-22.
UK government tightening controls on benefit fraud by digitally transforming operations
The government website says a claimant can commit benefit fraud by claiming benefits they’re not entitled to, either on purpose or in error. This could happen if they do not report a change in their circumstances that would affect the amount of money they are paid, or if they purposely provide false information to ensure they get higher rates.
DWP wants to use machine learning to reduce the rate of overpayments to claimants due to fraud and error, which it says has already fallen by 10% over the past year.
“Our tightened fraud controls and checks resulted in a significant reduction in fraud and error in the last year, and now we are seeing the tide start to turn,” said Mel Stride MP, secretary of state for work and pensions.
He continued: “Given that our welfare system exists to provide a strong financial safety net for the most vulnerable, it is imperative we continue to prevent anyone abusing this for their own profit, which is why we’re setting a new target to save £1.3bn in the next year and root out fraud wherever we find it.”
Last year, DWP launched a robust plan, ‘Fighting Fraud in the Welfare System’, to drive down fraud and error from the benefits system. The plan sits alongside an investment of £900m over a three-year period. The department has also estimated that its full range of controls last year saved at least £18bn through benefit checks, controls and counter-fraud activities, contributing to Prime Minister Rishi Sunak’s pledge to reduce debt.
NAO says that DWP’s AI could be biased
DWP has been using a machine learning model to flag potentially fraudulent claims for Universal Credit advances since 2021-22. An advance is a payment for claimants who do not have enough money to live on while they wait for their first Universal Credit payment. Claimants can also get budgeting advances to cover lump-sum costs such as emergency household expenses, getting a job or staying in work, or funeral costs.
According to NAO, DWP created the model by training an AI algorithm using historical claimant data and fraud referrals, which allowed the model to make predictions about which new benefit claims could contain fraud and error. The government department then developed and piloted four similar models for key areas of risk in Universal Credit, which include people living together, self-employment, capital and housing.
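The NAO report does not disclose how DWP's models work internally, but the general approach it describes can be illustrated with a deliberately simplified sketch: estimate fraud rates from labelled historical referrals, score incoming claims, and pass only the highest-risk ones to human reviewers. All data, field names and the scoring rule below are invented for illustration; a production system would use a proper statistical model, not per-feature averages.

```python
# Illustrative sketch only -- not DWP's actual model or data.
# Learn per-feature fraud rates from historical referrals, score new
# claims, and flag the highest-risk ones for human review.

from collections import defaultdict

def train(historical_claims):
    """Estimate P(fraud | feature=value) for each categorical feature."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [fraud, total]
    for claim, was_fraud in historical_claims:
        for feature, value in claim.items():
            stats = counts[(feature, value)]
            stats[0] += int(was_fraud)
            stats[1] += 1
    return {key: fraud / total for key, (fraud, total) in counts.items()}

def score(model, claim):
    """Average the learned fraud rates over the claim's features."""
    rates = [model.get((f, v), 0.0) for f, v in claim.items()]
    return sum(rates) / len(rates)

def flag_for_review(model, claims, threshold=0.5):
    """Claims scoring above the threshold would go to human reviewers,
    e.g. before the claim enters payment."""
    return [c for c in claims if score(model, c) > threshold]

# Invented toy data: each historical claim is a dict of features plus
# a label recording whether it turned out to be fraudulent.
history = [
    ({"self_employed": True, "has_capital": False}, True),
    ({"self_employed": True, "has_capital": True}, True),
    ({"self_employed": False, "has_capital": False}, False),
    ({"self_employed": False, "has_capital": True}, False),
]
model = train(history)

new_claims = [
    {"self_employed": True, "has_capital": False},
    {"self_employed": False, "has_capital": True},
]
print(flag_for_review(model, new_claims))  # only the first claim is flagged
```

The key design point the NAO highlights is visible even here: the model inherits whatever patterns exist in the historical referral data, so any bias in past referrals is learned and reproduced.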
As of April 2023, 5.9 million people claimed Universal Credit, increasing by 200,000 since July 2022.
However, NAO wrote in its report that there was an "inherent risk" that the algorithms used by DWP's machine learning model were biased towards selecting claims for review from certain "vulnerable people or groups with protected characteristics". It said that this could be due to "unforeseen bias" in the input data or "the design of the model itself".
"When using machine learning to prioritise reviews there is an inherent risk that the algorithms are biased towards selecting claims for review from certain vulnerable people or groups," the report says. "DWP faces a challenge in balancing transparency over how it uses machine learning to provide public confidence in the benefits system with protecting its capabilities by not tipping off fraudsters about how it tackles fraud."
The NAO also said that DWP needed to provide assurance that it was not unfairly treating any group of customers because of its use of AI: "In response to the Committee of Public Accounts 2022 report on fraud and error in the benefits system, DWP committed to reporting annually to Parliament on its assessment of the impact of data analytics on protected groups and vulnerable claimants."
DWP has said that it has "established tight governance and control over its use of machine learning" and has put safeguards in place designed to assess the impact that using the model has on its different customers. However, DWP has said that its ability to test for unfair impacts across protected characteristics is currently limited, which it blames in part on claimants not providing information about their demographics when making a benefit claim.
NAO writes that "DWP also segregates personal data on its analytical platforms for security reasons and has yet to incorporate all the relevant data onto its fraud and error analytics platform". DWP has reportedly said it plans to do this soon.