
Anatomy of the Arup $25M Deepfake Heist: How a Fake CFO Video Call Cleared 15 Wire Transfers

2026-05-15 · Ryuta Hamamoto

A timeline-based dissection of the January 2024 Arup Hong Kong heist, in which deepfake video conferencing tricked a finance officer into wiring $25.6M. The three reasons the attack succeeded, parallels with Ferrari and WPP, and five defenses you can deploy starting tomorrow.


I'm Ryuta Hamamoto from TIMEWELL.

“A finance staffer at the Hong Kong office joined a video call with the CFO. Several other senior colleagues were on screen too. The CFO said it was a confidential M&A matter and asked for 15 wire transfers, split across five accounts. The staffer executed the instructions. A week later, a call from London revealed every face on the call had been synthetic. Total loss: roughly USD 25.6 million.”

This is not a movie plot. It happened to British engineering firm Arup in January 2024. It became the textbook case for AI-driven business email compromise (BEC), the moment when “I saw them on video and heard their voice” stopped being a valid form of identity verification.

This article breaks the incident down on a timeline, explains why the attack succeeded, and lays out five defenses any organization can deploy starting tomorrow.

TL;DR — Three takeaways

  • Visual and vocal recognition can no longer authenticate a person in real time. AI-generated video and audio have reached the point where they are indistinguishable in live meetings
  • Attackers needed only a handful of seconds of publicly available video. Voice cloning and face cloning have collapsed to single-digit-minute setup
  • The real defense is process, not technology. The decisive step is removing "I confirmed it over video" from your payment authorization criteria

What happened — reconstructing the timeline

  • Pre-attack: Attackers collected seconds-to-minutes of footage from YouTube, IR videos, and social media posts of Arup executives
  • Early January 2024: The finance staffer in Hong Kong receives an email purporting to be from the CFO, requesting an urgent video meeting on a confidential M&A matter
  • The call: The staffer joins a Zoom-style call. On screen: the London CFO and several other recognizable colleagues, all on video
  • Instructions: The CFO asks that, for confidentiality reasons, standard approval flows be skipped and that funds be wired directly
  • Wire transfers: By late January, 15 transfers totaling roughly HKD 200 million (USD 25.6M) had been sent across 5 different accounts
  • Discovery: A call to the London office revealed the entire video meeting had been fabricated. Hong Kong police were notified

Hong Kong police later commented that similar tactics were being used against multiple companies, implying that Arup was only the tip of the iceberg.


Why the attack succeeded — three reasons

This is the part to repeat in every training session.

1. The CFO and the staffer had never met in person

The Hong Kong staffer had never met the London CFO face to face. The voice, mannerisms, and management style were all learned from public videos. The unavoidable structural consequence: the more visible a CFO becomes, the richer the training data for deepfakes. Every public company beyond a certain size lives with this trade-off.

2. The presence of multiple people created false comfort

A one-on-one call might have triggered second thoughts. But the meeting featured the CFO plus several other recognizable colleagues—all synthetic. Human cognition struggles to doubt a claim when multiple people make it together. Running several deepfakes in parallel would have been unthinkable a year before. It is now standard tradecraft.

3. “Confidential” was used to short-circuit normal process

“This is confidential—skip the usual approval flows” disabled the dual approval, line manager sign-off, and callback verification that should have stood between the staffer and a wire transfer button. Classic social engineering, paired with deepfakes, hollowed out governance in a single sentence.

Arup is not an exception

Deepfake-driven BEC has multiple documented cases since 2024.

  • Ferrari (July 2024) — A scam phone call impersonated the Ferrari CEO using voice cloning. An executive aborted it by asking, “What was the last book you recommended to me?”
  • WPP (May 2024) — An attempt to impersonate the WPP CEO via WhatsApp plus voice clone
  • PM Kishida deepfake (November 2023) — A subtitled deepfake video posing as a Japanese TV broadcaster spread on social media
  • Japan — Frequent impersonation ads using celebrities such as Yusaku Maezawa (former ZOZO CEO) in 2024-2025

The Microsoft Digital Defense Report 2025 indicates that detections of AI-generated forgeries grew 195% year over year. In Japan, the number of deepfakes reportedly increased 28-fold in 2024 compared with 2023.

Five defenses to deploy starting tomorrow

1. Remove “verified by video call” from your authorization criteria

This is the headline. Explicitly strike “confirmed with management in a video call” from the conditions required to authorize payments, contract signing, or PII disclosure.

Replace it with:

  • Callbacks via a separate, identity-verified channel (corporate phone, internal chat with verified accounts)
  • Dual approval routed through your formal workflow system
  • Transactions above a threshold restricted to business hours in the company's headquarters time zone
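
The replacement criteria above can be sketched as a single authorization gate. This is a minimal illustration under stated assumptions, not any real payment system: the threshold, time zone, business hours, and field names are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, time
from zoneinfo import ZoneInfo

# Hypothetical policy values -- tune to your own controls.
HIGH_VALUE_THRESHOLD = 100_000           # USD
HQ_TZ = ZoneInfo("Europe/London")        # headquarters time zone
BUSINESS_HOURS = (time(9, 0), time(17, 30))

@dataclass
class TransferRequest:
    amount_usd: float
    callback_verified: bool   # confirmed via a separate, identity-verified channel
    approver_ids: set         # distinct approvers in the formal workflow system
    requested_at: datetime    # timezone-aware

def authorize(req: TransferRequest) -> tuple:
    """Return (approved, reason). Note there is no 'seen on video' input at all:
    video confirmation simply cannot contribute to the decision."""
    if not req.callback_verified:
        return False, "no out-of-band callback verification"
    if len(req.approver_ids) < 2:
        return False, "dual approval not satisfied"
    if req.amount_usd >= HIGH_VALUE_THRESHOLD:
        local = req.requested_at.astimezone(HQ_TZ).time()
        if not (BUSINESS_HOURS[0] <= local <= BUSINESS_HOURS[1]):
            return False, "high-value transfer outside HQ business hours"
    return True, "ok"
```

The design point is negative: the safest way to stop "I confirmed it on video" from authorizing a payment is for the authorization function to have no field where that claim could even be recorded.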

2. Operate a shared secret between executives and finance

Families are increasingly using shared phrases for phone-based identity checks. Apply the same to your executive-to-finance flow:

  • A monthly-rotating combination of words and numbers
  • Memory-based questions never shared in writing (a recommended book, a pet's name)
  • Make it an explicit policy that executives themselves will confirm the shared secret if anyone asks for an urgent transfer over video
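
One way to get a monthly-rotating code that is never shared in writing is to derive it from a master secret exchanged once, in person; both sides compute the current code independently. A sketch under that assumption, borrowing the dynamic-truncation scheme from HOTP (RFC 4226):

```python
import hashlib
import hmac
from datetime import datetime, timezone
from typing import Optional

def monthly_challenge_code(master_secret: bytes, when: Optional[datetime] = None) -> str:
    """Derive a 6-digit code that rotates each calendar month.

    The executive and finance each compute this from a master secret
    exchanged once, face to face -- the code itself is never transmitted.
    """
    now = when or datetime.now(timezone.utc)
    period = f"{now.year}-{now.month:02d}".encode()
    digest = hmac.new(master_secret, period, hashlib.sha256).digest()
    # Dynamic truncation in the style of HOTP (RFC 4226).
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify_code(master_secret: bytes, spoken_code: str) -> bool:
    """Constant-time comparison against the current month's code."""
    return hmac.compare_digest(monthly_challenge_code(master_secret), spoken_code)
```

A derived code avoids the main failure mode of a static passphrase: nothing that an attacker could phish once remains valid next month.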

3. Recognize the trade-off between executive media exposure and training data

CFO and CEO video footage improves the quality of deepfakes the more it is published. Treat this as a strategic trade-off:

  • Distribute IR videos with subtitles and watermarks (reduce raw footage in circulation)
  • Vet executive video posts on social media in advance
  • Specify post-interview footage reuse limits in contracts with media

4. “Bypass normal process because this is confidential” is always a scam signal

Repeat this rule in employee training: real executives never tell you to bypass process. Build a payment system in which confidential matters are approved through the normal flow plus an NDA, never by bypassing the flow.

5. Layer in deepfake detection tools as a backstop

No tool offers complete detection today, but these can act as auxiliary signals:

  • Microsoft Video Authenticator
  • Intel FakeCatcher
  • Reality Defender
  • DeepBrain AI Detector

Use them as a complement to shared secrets and out-of-band verification—never as the primary authentication mechanism.
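
To make "never the primary mechanism" concrete, detector output can be wired in so that it is only ever allowed to block or escalate a transfer, never to release one. A sketch with hypothetical signal names; the 0.5 threshold is an arbitrary placeholder:

```python
from typing import Optional

def release_decision(deepfake_score: Optional[float],
                     callback_verified: bool,
                     process_bypass_requested: bool) -> str:
    """Combine signals into one of: 'block', 'hold', 'escalate', 'release'.

    deepfake_score: 0.0-1.0 from a detection tool, or None if unavailable.
    The process signals dominate: a clean detector score can never
    substitute for out-of-band verification.
    """
    if process_bypass_requested:
        return "block"      # "skip the flow" is always treated as a scam signal
    if not callback_verified:
        return "hold"       # out-of-band verification is mandatory
    if deepfake_score is not None and deepfake_score >= 0.5:
        return "escalate"   # tool flagged the call; route to security review
    return "release"
```

Note that `deepfake_score=None` still allows release when the process checks pass: the tool is a backstop, and its absence must not stall legitimate payments that cleared the real controls.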

Two homework items for executives starting today

  1. Together with the CFO and CISO, audit the last 30 days of payments and flag every case where “a video meeting with management” was used as the authorizing basis
  2. Add to the next board agenda: redesigning identity verification for the deepfake era

This is not an IT problem. It is a board-level problem.

How WARP SECURITY treats this

TIMEWELL's WARP SECURITY training treats the Arup-style deepfake BEC as one of five simulated incident exercises.

In Executive DAY, leaders role-play the question "If this happened to us, who actually stops it?" Participants experience the cognitive biases at play from the seats of the finance officer, the line manager, and the executive.

In Practitioner DAY, we move into detection technologies, payment system design reviews, and SOC alert design.

Summary

  • The Arup heist was the moment “I saw and heard them” collapsed as an identity proof
  • The attack lives at the intersection of payment processes and human cognitive biases, not in technology alone
  • Defense is a three-piece set: removing video confirmation from authorization, shared secrets, and executive mindset change
  • This is not an IT problem—it is an executive problem

If your first instinct in “deepfake defense” is to evaluate detection tools, you have started in the wrong place. Process first. Tools second. Otherwise, the wire transfer button gets pressed.
