Video calls as a new attack surface
On the dark web, real‑time deepfake services for video conferences are already being advertised from around 30 US dollars, including custom face and voice profiles. At the same time, open‑source projects give technically minded users everything they need at no cost. The barrier to using live deepfakes is extremely low – and they are already being used.
In an early documented case from 2024, an international company in Hong Kong was deceived during a video call.
An employee transferred more than 20 million US dollars because they believed they were talking to their superior. It’s still unclear whether a full live deepfake was used or “only” cloned voices.
Since then, AI tooling has advanced rapidly. Today, convincing synthetic personas can be created with modest effort and budget. Unsurprisingly, cybercriminals are no longer limiting live deepfakes to fraudulent payment instructions (the classic “CEO fraud”).
They are also using them to:
- Extract sensitive information
- Bypass video identification processes
- Circumvent biometric checks
This alone opens up a wide landscape of fraud scenarios. One worrying example is the live‑deepfake version of the “grandparent scam”: older people are pressured in real time by a criminal posing as a distressed relative, using an almost perfect imitation of their voice and appearance.
How can live deepfakes be detected?
The obvious question is: how can we reliably protect ourselves against this new form of digital deception?
At present, there is no foolproof defence against live deepfakes. Deepfake tools are evolving too quickly for any single detection method to remain effective for long. That makes open discussion and awareness critical.
Whether robust technical protection will exist in future is still uncertain. Some promising approaches are in development (for example, AI‑based authenticity checks and cryptographic media signatures), but they are in a race with the deepfake tools themselves.
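To make the idea of cryptographic media signatures concrete, here is a minimal sketch in Python of how a capture device could sign each video frame so that the receiving client can verify it. It uses the widely available cryptography package; the key handling and function names are illustrative, and real provenance schemes such as C2PA are considerably more involved.

```python
# Illustrative sketch: the camera signs a hash of each frame, the
# receiving client verifies it. Key names are hypothetical; real
# schemes (e.g. C2PA) also cover metadata, certificates and edit history.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would sit in the camera's secure hardware.
camera_key = Ed25519PrivateKey.generate()
trusted_public_key = camera_key.public_key()  # distributed out of band

def sign_frame(frame_bytes: bytes) -> bytes:
    """Sign a SHA-256 digest of the raw frame at capture time."""
    return camera_key.sign(hashlib.sha256(frame_bytes).digest())

def frame_is_authentic(frame_bytes: bytes, signature: bytes) -> bool:
    """Verify a frame's signature on the receiving side."""
    try:
        trusted_public_key.verify(signature, hashlib.sha256(frame_bytes).digest())
        return True
    except InvalidSignature:
        return False

frame = b"raw pixel data of one video frame"
sig = sign_frame(frame)
print(frame_is_authentic(frame, sig))        # True
print(frame_is_authentic(b"tampered", sig))  # False: frame was altered
```

A deepfake injected between camera and call would fail such a check – provided the signing key genuinely lives in trusted hardware, which is exactly the hard part these initiatives are working on.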
In the meantime, a combination of technical scepticism and healthy procedural hygiene can help. Here are some practical warning signs for live video calls.
- Visual inconsistencies: We’ve all seen visual glitches when someone uses a virtual background: bits of hair disappearing, hands flickering in and out, or strange halo effects. Similar artefacts can appear on the face with live deepfakes:
  - Edges of the face or hairline that flicker or blur
  - Glasses, jewellery or teeth that look unnaturally sharp or unstable
  - Lighting on the face that doesn’t quite match the room
Tip: If someone’s usual setup (camera angle, headset, room, background) suddenly looks very different for no obvious reason, politely ask about it. For example: “New office?” or “Different camera today?” Both the answer and the way they react can be revealing.
- Uncanny facial expressions and behaviour: Deepfake faces are getting better, but micro‑expressions and natural timing are still hard to replicate perfectly. Warning signs include:
  - Facial expressions that don’t match the tone or emotion of the conversation
  - Slightly delayed reactions or “frozen” smiles
  - Behaviour that doesn’t fit what you know about the person (e.g. unusually informal, unusually pushy)
Tip: Ask open questions and steer the conversation into personal or unexpected territory – things only the real person should know or respond to naturally. This can be done in a friendly way, but may expose an imposter.
- Audio inconsistencies: Modern voice models sound good, but they’re still not perfect. Listen for the following (the first cue is roughly quantified in a sketch after this list):
  - A voice that sounds “too clean” or slightly metallic
  - A constant tone with no breathing, room noise or microphone handling sounds
  - Poor lip‑sync between audio and video
Tip: Live deepfakers will often try to minimise talking to reduce the chance of detection. If the “executive” on the call delegates most of the speaking to “assistants” or keeps answers very short, treat that as a red flag.
- Contextual and procedural red flags: Most successful frauds exploit not just technology but psychology and process gaps. Warning signs include:
  - Requests that break normal rules (“Skip the usual approvals”, “Let’s keep this between us”)
  - Unusual time pressure (“We must complete this in the next 10 minutes”)
  - Strong emotional leverage (fear, urgency, flattery, threats)
Tip: If you can’t get a clear, plausible explanation for deviations from standard procedure or the urgency being imposed, end the call. Verify the situation via a second channel: for example, call the known mobile number, send a message via a pre‑agreed internal channel, or involve a colleague. A quick parallel phone call to the “real” person during the meeting can be extremely effective.
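The “too clean” audio cue from the list above can even be roughly quantified. The following Python sketch estimates a recording’s noise floor relative to its speech level: natural recordings keep some room tone and breath noise between words, while some synthetic voices go almost digitally silent. The file name and threshold are purely illustrative – this is a heuristic, not a detector.

```python
# Rough heuristic for the "too clean" cue: compare the quietest parts
# of a recording (noise floor) with the loudest (speech). Assumes a
# 16-bit mono PCM WAV file; the threshold below is illustrative only.
import wave

import numpy as np

def noise_floor_ratio(path: str, frame_ms: int = 30) -> float:
    """Ratio of the quietest frames' RMS to the loudest frames' RMS."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float64)
    hop = int(rate * frame_ms / 1000)          # samples per analysis frame
    n = len(samples) // hop
    frames = samples[: n * hop].reshape(n, hop)
    rms = np.sqrt(np.mean(frames**2, axis=1)) + 1e-9
    quiet = np.percentile(rms, 10)             # approx. noise floor
    loud = np.percentile(rms, 90)              # approx. speech level
    return quiet / loud

ratio = noise_floor_ratio("call_recording.wav")  # hypothetical file
if ratio < 0.001:  # near-digital silence between words
    print("Unusually clean noise floor - worth a second look")
```

Signals like this degrade quickly as the tools improve, which is why the procedural checks above matter more than any single measurement.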
Final thoughts: trust needs rules
Large parts of the digital world are starting to feel unstable. What can we still trust? Who is genuinely “in the room” with us?
For companies and organisations, this is not a theoretical question. It has direct implications for fraud risk, compliance and reputation. The goal should be a culture of trust that is grounded in clear rules rather than blind faith in what appears on screen. For example:
- Dual‑control or four‑eyes principles for sensitive actions (payments, data access) – a minimal sketch of this rule follows the list
- Explicit “no exceptions” policies for process steps triggered over video, email or chat
- Agreed out‑of‑band verification methods for high‑risk decisions
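As an illustration of the first of these rules, here is a minimal Python sketch of a dual‑control check for payments. All names and the threshold are hypothetical; the point is simply that no single person – however convincing they look on video – can release a sensitive transfer alone.

```python
# Minimal sketch of a dual-control ("four-eyes") rule: large payments
# need two distinct approvers. Names and the threshold are hypothetical.
from dataclasses import dataclass, field

DUAL_CONTROL_THRESHOLD = 10_000  # policy-defined amount, e.g. in euros

@dataclass
class Payment:
    amount: float
    beneficiary: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, user: str) -> None:
        # A set ignores duplicates: approving twice as the same
        # person does not satisfy the rule.
        self.approvals.add(user)

    def can_execute(self) -> bool:
        required = 2 if self.amount >= DUAL_CONTROL_THRESHOLD else 1
        return len(self.approvals) >= required

payment = Payment(amount=20_000_000, beneficiary="Unknown Ltd")
payment.approve("employee_on_the_video_call")
print(payment.can_execute())  # False: a second, independent approval is missing
payment.approve("cfo_reached_by_phone")
print(payment.can_execute())  # True
```

Enforced in software, a rule like this would likely have forced exactly the out‑of‑band verification step that was missing in the Hong Kong case.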
When communicating externally, especially about the use of AI, transparency and clarity are crucial. If customers start to doubt whether they’re interacting with real people – or whether content is genuine – trust erodes quickly.
Deepfakes are here to stay. The task now is to adapt our tools, our processes and our habits so that trust remains possible – without being naive.
Text: Falk Hedemann