Arup has reported losing $25m in a deepfake scam earlier this year. Threat actors convinced an employee at the engineering firm’s Hong Kong office to transfer the sum in a series of transactions after one of the attackers impersonated a senior manager on a video call. According to the city’s police force, the firm acted on a message it believed had been sent by Arup’s UK-based chief financial officer. The cybercriminals remain unidentified.
“We can confirm that fake voices and images were used” in the scam, Arup told the FT. “Our financial stability and business operations were not affected and none of our internal systems were compromised.”
Subtle psychological pressure exerted on Arup employee
The term ‘deepfake’ usually refers to AI-generated imagery capable of masking an individual’s face with that of another. Popular in mainstream media as a means to digitally rejuvenate older actors or perform impersonations, deepfakes can also be used by cybercriminals in business email compromise (BEC) scams to trick staff into transferring large sums into dummy accounts. In the case of Arup, scammers convinced a member of staff to make 15 transfers to five different local bank accounts before the engineering firm realised it was being defrauded.
According to senior superintendent Baron Chan, the gang exerted subtle peer pressure on the individual by inviting them to a video call that appeared to be populated by multiple senior leaders at Arup. “Because the people in the video conference looked like the real people, the [staff member] made [the] transactions as instructed,” Chan told reporters in February. “I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference.”
Deepfake fraud a new anxiety for banking CIOs
The investigation into what happened at Arup is ongoing, though no arrests have been made yet. News of the scam at the UK engineering multinational follows an incident earlier this month wherein cybercriminals unsuccessfully used a deepfake of Mark Read, chief executive of advertising firm WPP, to obtain money and personal information. Another case in January saw the image of Singaporean prime minister Lee Hsien Loong being used to promote fraudulent investment products.
Perhaps unsurprisingly, deepfake fraud has become a source of anxiety for CIOs, not least in the financial sector. Earlier this year, Cifas research director Sandra Peaston told the FT that banks were already being hit by deepfake fraud, albeit by criminals using faked videos of celebrities to bypass KYC checks. In time, Peaston warned, it “will require less and less material to train [deepfake software] and could be used on a more industrial scale,” allowing cybercriminals to potentially imitate anyone. Companies appear to be girding themselves for this threat, with recent analysis from Gartner estimating that, from 2026, a third of enterprises will stop relying on deepfake detection methods in isolation to detect fraud.