Digitalisation & Technology, 7 May 2026

Deepfakes – now also ‘live’ in virtual meetings

Face Swap Deepfakes

When the Pope suddenly appears dressed like a rapper, or Sylvester Stallone somehow replaces Arnold Schwarzenegger in Terminator 2, most people quickly realise: that’s a deepfake. With these pre‑produced clips, you usually have time to fact‑check. A newer variant leaves little room for that: deepfakes in live video calls.

Deepfakes themselves are nothing new. They’ve been around for years, but the explosion of powerful AI tools has pushed them into the mainstream. Anyone can now create convincingly fake images and videos with minimal effort. Visual platforms such as TikTok, Instagram and Pinterest are being flooded with AI‑generated content. Much of it is playful or artistic – but deepfakes created with malicious intent are increasing fast.

Yesterday it was cat filters, today it’s near‑perfect deception

For a while, live virtual meetings felt like a relatively safe space. If you could see someone “in person” via webcam, that was good enough. At worst, filters or background effects could cause some mild embarrassment.

One famous example: a Texan lawyer who joined a Zoom hearing as a wide‑eyed kitten because a filter had been left switched on. While he frantically tried to remove it, he kept repeating: “I’m not a cat!” – but by then the clip had already gone viral.

This story is amusing, but it now has a serious twist. Until recently, hardly anyone felt the need to “prove” who they were in a video call. That’s changing: in 2026, a video call on its own is no longer a credible way to verify someone’s identity – in private life or in business.

The reason: widely available AI that can modify faces and voices in real time, making it possible to impersonate someone else in a live stream.

What’s already technically possible today

Live Face Swap: Tools such as Deep Live Cam and similar projects can replace faces in real time in a live video stream – including facial expressions and lip‑sync. They work with major platforms such as Zoom, Microsoft Teams and Google Meet, or via OBS VirtualCam in almost any video call.

Live voice cloning: AI models can clone a voice from just a few seconds of recorded speech and drive it in real time. Pitch, tone, accent and even language can be tweaked to sound more or less like the target.

Completely fabricated individuals: Attackers can combine facial and voice deepfakes and feed them straight into a call. The person on screen appears to be a genuine executive, colleague or family member, even though they don’t exist in that form at all.

Video calls as a new attack surface

On the dark web, real‑time deepfake services for video conferences are already being advertised from around 30 US dollars, including custom face and voice profiles. At the same time, open‑source projects give technically minded users everything they need at no cost. The barrier to using live deepfakes is extremely low – and they are already being used.

In an early documented case from 2024, an international company in Hong Kong was deceived during a video call.

An employee transferred more than 20 million US dollars because they believed they were talking to their superior. It’s still unclear whether a full live deepfake was used or “only” cloned voices.

Since then, AI tooling has advanced rapidly. Today, convincing synthetic personas can be created with modest effort and budget. Unsurprisingly, cybercriminals are no longer using live deepfakes only for fraudulent payment instructions (the classic “CEO fraud”).

They are also using them to:

  • Extract sensitive information
  • Bypass video identification processes
  • Circumvent biometric checks

This alone opens up a wide landscape of fraud scenarios. One worrying example is the live‑deepfake version of the “grandparent scam”: older people are pressured in real time by a criminal posing as a distressed relative, using an almost perfect imitation of their voice and appearance.

How can live deepfakes be detected?

The obvious question is: how can we reliably protect ourselves against this new form of digital deception?

At present, there is no foolproof defence against live deepfakes. Deepfake tools are evolving too quickly for any single detection method to remain effective for long. That makes open discussion and awareness critical.

Whether robust technical protection will exist in future is still uncertain. Some promising approaches are in development (for example, AI‑based authenticity checks and cryptographic media signatures), but they are in a race with the deepfake tools themselves.

In the meantime, a combination of technical scepticism and healthy procedural hygiene can help. Here are some practical warning signs for live video calls.

  • Visual inconsistencies: We’ve all seen visual glitches when someone uses a virtual background: bits of hair disappearing, hands flickering in and out, or strange halo effects. Similar artefacts can appear on the face with live deepfakes: edges of the face or hairline that flicker or blur; glasses, jewellery or teeth that look unnaturally sharp or unstable; lighting on the face that doesn’t quite match the room.
    Tip: If someone’s usual setup (camera angle, headset, room, background) suddenly looks very different for no obvious reason, politely ask about it. For example: “New office?” or “Different camera today?” The answer – and the way it is given – may be revealing.
  • Uncanny facial expressions and behaviour: Deepfake faces are getting better, but micro‑expressions and natural timing are still hard to replicate perfectly. Warning signs include: facial expressions that don’t match the tone or emotion; slightly delayed reactions or “frozen” smiles; behaviour that doesn’t fit what you know about the person (e.g. unusually informal, unusually pushy).
    Tip: Ask open questions and steer the conversation into personal or unexpected territory – things only the real person should know or respond to naturally. This can be done in a friendly way, but may expose an imposter.
  • Audio inconsistencies: Modern voice models sound good, but they’re still not perfect. Listen for: a voice that sounds “too clean” or slightly metallic; a constant tone with no breathing, room noise or microphone handling sounds; poor lip‑sync between audio and video.
    Tip: Live deepfakers will often try to minimise talking to reduce the chance of detection. If the “executive” on the call delegates most of the speaking to “assistants” or keeps answers very short, treat that as a red flag.
  • Contextual and procedural red flags: Most successful frauds exploit not just technology but psychology and process gaps. Warning signs include: requests that break normal rules (“Skip the usual approvals”, “Let’s keep this between us”); unusual time pressure (“We must complete this in the next 10 minutes”); strong emotional leverage (fear, urgency, flattery, threats).
    Tip: If you can’t get a clear, plausible explanation for deviations from standard procedure or the urgency being imposed, end the call. Verify the situation via a second channel: for example, call the known mobile number, send a message via a pre‑agreed internal channel, or involve a colleague. A quick parallel phone call to the “real” person during the meeting can be extremely effective.

Final thoughts: trust needs rules

Large parts of the digital world are starting to feel unstable. What can we still trust? Who is genuinely “in the room” with us?

For companies and organisations, this is not a theoretical question. It has direct implications for fraud risk, compliance and reputation. The goal should be a culture of trust that is grounded in clear rules rather than blind faith in what appears on screen. For example:

  • Dual‑control or four‑eyes principles for sensitive actions (payments, data access)
  • Explicit “no exceptions” policies for process steps triggered over video, email or chat
  • Agreed out‑of‑band verification methods for high‑risk decisions

When communicating externally, especially about the use of AI, transparency and clarity are crucial. If customers start to doubt whether they’re interacting with real people – or whether content is genuine – trust erodes quickly.

Deepfakes are here to stay. The task now is to adapt our tools, our processes and our habits so that trust remains possible – without being naive.

Text: Falk Hedemann


Your opinion
If you would like to share your opinion on this topic with us, please send a message to radar@ergo.de

