Digitalisation & Technology, 18 March 2026

Reality or fake?

How deepfakes are stress-testing our perception every day

ERGO expert Anna Kessler

Deepfakes now show up in our feeds almost daily – on TikTok, Instagram, YouTube, and via email and messaging apps. We’re confronted with fake headlines, phishing messages, cloned voices, and hyper‑realistic photos or videos, both at work and in our private lives. ERGO expert Anna Kessler gives an overview.

What’s worrying is that we often don’t spot these as fake immediately – and sometimes not at all. According to current figures from cybersecurity firm Deepstrike, humans correctly identify high‑quality AI‑generated videos only 24.5% of the time.

The tech brings massive new opportunities, but also a very real threat. Cyber‑attacks and fraud attempts using deepfakes are growing exponentially and are hitting all sectors worldwide – with the financial industry particularly exposed. Thanks to rapid advances in AI, deepfakes can now be created quickly and cheaply with relatively little training data, and injected straight into high‑reach media channels.

How are deepfakes defined?

The term “deepfake” combines “deep learning” and “fake” and points directly to the underlying AI tech stack. Using deep learning models, synthetic media is generated that looks and sounds uncannily real. Germany’s Federal Office for Information Security (BSI) groups deepfakes into three categories: video/image, audio and text.

1. Video / image

Over the past few years, several AI‑based methods for manipulating faces have become mainstream.

One is “face swapping”, where an image of one person is altered so that it appears to show another person’s face, while the original facial expression, lighting and gaze direction are preserved. A relatively small amount of high‑quality image material is enough to train autoencoder models that analyse facial features precisely and can often recreate them in real time and at high resolution.
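
For readers who want to see the mechanics, here is a deliberately minimal PyTorch sketch of the classic face‑swap training setup: a shared encoder learns a common representation of faces, while one decoder per person reconstructs that person’s face. All layer sizes and names are simplified assumptions for illustration, not a real production pipeline.

```python
import torch
import torch.nn as nn

# Minimal sketch of the classic face-swap architecture (illustrative only):
# a shared encoder compresses any face into a common latent code, and one
# decoder per identity reconstructs faces of that identity.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's faces
decoder_b = Decoder()  # trained only on person B's faces

# The swap happens at inference time: encode a face of A, decode with B.
face_of_a = torch.rand(1, 3, 64, 64)     # dummy 64x64 RGB input
swapped = decoder_b(encoder(face_of_a))  # B's face with A's expression/pose
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The trick is that the shared latent code captures expression, pose and lighting, while each decoder supplies the identity‑specific appearance, which is why the source’s expression survives the swap.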

Another method is “face reenactment”, where a person’s facial expressions and movements are altered so they appear to say or do things that never actually happened. A 3D model of the target person is generated from a video stream; combined with the video stream of a “manipulator”, it produces highly realistic facial expressions in real time. If these methods are used to generate synthetic people who don’t exist in reality, the BSI refers to them as “(pseudo) identities”.
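
The underlying idea of transferring expressions can be illustrated with a toy example on facial landmarks: the manipulator’s deviation from a neutral pose is applied to the target’s face model. All coordinates and the helper function below are invented for illustration; real reenactment systems fit dense 3D face models frame by frame.

```python
import numpy as np

# Toy illustration of the expression-transfer idea behind face reenactment.
# Real systems fit dense 3D face models per video frame; here we use a few
# dummy 2D landmark points (all values are invented for illustration).

def reenact(target_neutral, driver_neutral, driver_frame):
    """Apply the driver's expression (their offset from a neutral pose)
    to the target person's neutral landmarks."""
    expression_offset = driver_frame - driver_neutral
    return target_neutral + expression_offset

# Three dummy landmarks (e.g. mouth corners and chin), as (x, y) points.
target_neutral = np.array([[0.40, 0.70], [0.60, 0.70], [0.50, 0.85]])
driver_neutral = np.array([[0.42, 0.68], [0.58, 0.68], [0.50, 0.83]])
driver_smiling = np.array([[0.39, 0.66], [0.61, 0.66], [0.50, 0.84]])

# The target's landmarks now "smile" like the driver, frame by frame.
print(reenact(target_neutral, driver_neutral, driver_smiling))
```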

2. Audio

According to the BSI, voice manipulation is typically done via “Text‑to‑Speech (TTS)” or “Voice Conversion (VC)”. With TTS, text is converted into an audio signal so that the same content is spoken in the synthetic voice of a target person. By contrast, voice conversion replaces a real audio signal in real time with an artificially generated voice. To the listener, it sounds as if the target person is actually speaking.
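
The difference between the two attack paths boils down to their interfaces, as the hedged Python sketch below shows. The function bodies are mere placeholders standing in for large neural models; all names and values are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed audio sample rate

# Placeholder standing in for a trained voice model of the target person.
target_voice_profile = {"name": "target", "embedding": np.zeros(256)}

def text_to_speech(text: str, voice) -> np.ndarray:
    """TTS path: written text goes in, synthetic speech in the target's
    voice comes out. (Placeholder: returns one second of silence.)"""
    return np.zeros(SAMPLE_RATE)

def voice_conversion(source_audio: np.ndarray, voice) -> np.ndarray:
    """VC path: real speech from the attacker goes in, the same words
    re-voiced as the target come out, potentially in real time.
    (Placeholder: returns the input unchanged.)"""
    return source_audio

# TTS: the attacker only needs to type the message.
fake_1 = text_to_speech("Please transfer the funds today.", target_voice_profile)

# VC: the attacker speaks, and the live audio is re-voiced on the fly.
attacker_speech = np.random.randn(SAMPLE_RATE)  # dummy microphone input
fake_2 = voice_conversion(attacker_speech, target_voice_profile)
```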

3. Text

Large Language Models can generate long, coherent texts within seconds, or continue existing texts plausibly, with style and tone tailored to a specific audience and context. The ability to fake faces and voices in particular is becoming a growing risk for individuals and organisations. We see this in phishing for data, in disinformation campaigns, and in the bypassing of biometric systems used, for example, for remote identity verification (BSI).

There is no longer any realistic path “around” AI. The technology is here to stay. All the more reason for us to learn early on how to live and work with it effectively and responsibly.

Anna Kessler, Head of Innovation Consulting & Delivery at ERGO Group AG

How serious is the threat from deepfakes? Key figures at a glance

A PwC cyber security study shows that around 90% of companies in Germany have been the target of attacks on their data in the past three years (82% globally). Almost half of those companies suffered financial losses of up to USD 1 million.

By 2024, about half of all companies globally had already reported fraud cases involving both audio and video deepfakes. Different deepfake techniques are often combined, as illustrated by the attack on UK engineering firm Arup (see graphic). Identity fraud is considered the primary threat, with particularly high case numbers reported in Germany, Mexico and the UAE (Regula).

In absolute terms, the number of deepfakes in circulation rose from roughly 500,000 in 2023 to a projected 8 million in 2025 (Deepstrike).

The financial sector is especially vulnerable to deepfake‑driven losses and disinformation: it relies heavily on high‑value digital transactions, and the risk of eroding trust is substantial. But other sectors, such as media and telecoms, are also frequently affected.

In insurance, deepfakes are used not only in targeted attacks on IT and data infrastructure. They are increasingly being deployed to create realistic images and documents for fake claims, or to fool customer identification systems using manipulated voice samples or synthetic identities.

According to a Swiss Re report, litigation and liability claims are also rising significantly, as policyholders more frequently fall victim to deepfake‑enabled fraud. Global losses from insurance fraud using deepfakes are already estimated to exceed USD 120 billion annually (ComputerWeekly).

How can organisations protect themselves?

On the regulatory side, the EU AI Act – being rolled out step by step from August 2024 – requires organisations to label AI‑generated and synthetic content in a machine‑readable way as artificial or manipulated (EU AI Act).
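
As a toy illustration of what a machine‑readable label can look like, the following Python sketch uses the Pillow library to embed a text marker in a PNG’s metadata. The key and value names are our own invention; the AI Act does not prescribe this particular mechanism, and industry provenance standards such as C2PA go considerably further.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Toy example of a machine-readable "AI-generated" label embedded in PNG
# metadata. The key/value names are illustrative, not a legal standard.
image = Image.new("RGB", (64, 64), color="grey")  # stand-in for generated output

metadata = PngInfo()
metadata.add_text("ai-generated", "true")
metadata.add_text("generator", "example-model-v1")  # hypothetical model name
image.save("generated.png", pnginfo=metadata)

# Any downstream tool can read the label back without human inspection.
print(Image.open("generated.png").info.get("ai-generated"))  # -> "true"
```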

To ensure the authenticity and integrity of media, cryptographic methods are used to bind the origin of material unambiguously to an identity. Current approaches focus on embedding digital signatures in the capture pipeline (image or audio) so that any later unauthorised modification can be detected. Knowledge of so‑called “poisoning attacks”, in which training data is deliberately manipulated, is also applied defensively to systematically detect and counter such manipulation attempts (BSI).
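
Here is a minimal sketch of the signing idea, using an Ed25519 key from the Python cryptography library: the capture device signs the media bytes at recording time, and any later modification breaks verification. Key management is heavily simplified here; real provenance schemes bind keys to certified device or organisational identities.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Sketch: a capture device signs the media bytes at recording time.
# (Simplified: real schemes bind the key to a certified device identity.)
device_key = Ed25519PrivateKey.generate()
media_bytes = b"\x89PNG...raw image or audio bytes..."  # placeholder content

signature = device_key.sign(media_bytes)

# Later, anyone with the public key can check integrity and origin.
public_key = device_key.public_key()
try:
    public_key.verify(signature, media_bytes)
    print("Media is unmodified and from the claimed source.")
except InvalidSignature:
    print("Media was altered after signing!")

# Even a few flipped bytes break verification:
tampered = media_bytes.replace(b"raw", b"fak")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampering detected.")
```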

With legally robust digital signatures in mind, the planned 2027 rollout of the EUDI Wallet – for secure, EU‑wide digital identification via smartphone – is moving increasingly into focus.

Deep learning models are also being used on the defensive side: they detect anomalies and manipulated media using deepfake training datasets and known systemic weaknesses, and help build proactive protection mechanisms (Fraunhofer AISEC).
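
Conceptually, many of these detectors are binary classifiers trained on labelled datasets of real and manipulated media. The following PyTorch sketch shows the bare‑bones structure; the architecture, sizes and random dummy data are illustrative assumptions, far removed from a production‑grade detector.

```python
import torch
import torch.nn as nn

# Minimal sketch of a deepfake detector: a binary classifier trained on
# labelled real/fake images. Architecture and sizes are illustrative only.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                 # one logit: fake vs. real
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# One dummy training step on a batch of 8 images (label 0 = real, 1 = fake).
images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(detector(images), labels)
loss.backward()
optimizer.step()

# At inference, a probability above 0.5 flags a likely manipulation.
prob_fake = torch.sigmoid(detector(images[:1]))
print(f"P(fake) = {prob_fake.item():.2f}")
```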

Equally important are comprehensive employee training programmes and deepfake simulations, alongside proactive analytics, real‑time reporting and rapid response playbooks.

Where do new opportunities emerge?

Both privately and professionally, it’s essential to engage seriously with the risks posed by deepfakes. At the same time, we shouldn’t overlook the upside of rapid technological progress.

AI technologies are already delivering significant efficiency gains in many organisations – through process automation, AI agents acting as virtual employees, or everyday helpers such as ERGOGPT. Communications and marketing teams are increasingly working with generative AI, enabling them to react faster to current events.

There are also new market opportunities in product management, for example in the development of cyber insurance products. Chatbots, voicebots and AI avatars are transforming customer interaction by providing 24/7 access to products and services.

Initiatives such as the EUDI Wallet are creating a Europe‑wide foundation for further growth in digital transactions and for better protection of our digital identities.

Text: Anna Kessler


Your opinion
If you would like to share your opinion on this topic with us, please send us a message at radar@ergo.de.

