How serious is the threat from deepfakes? Key figures at a glance
A PwC cyber security study shows that around 90% of companies in Germany have been the target of attacks on their data in the past three years (82% globally). Almost half of the affected companies suffered financial losses of up to USD 1 million.
By 2024, about half of all companies globally had already reported fraud cases involving both audio and video deepfakes. Different deepfake techniques are often combined, as illustrated by the attack on UK engineering firm Arup (see graphic). Identity fraud is considered the primary threat, with particularly high case numbers reported in Germany, Mexico and the UAE (Regula).
In absolute terms, the number of deepfakes in circulation rose from roughly 500,000 in 2023 to a projected 8 million in 2025 (Deepstrike).
The financial sector is especially vulnerable to deepfake‑driven losses and disinformation: it relies heavily on digital high‑value transactions, and the risk of eroding trust is substantial. But other sectors such as media and telecoms are also frequently affected.
In insurance, deepfakes are used not only for targeted attacks on IT and data infrastructure, but increasingly also to create realistic images and documents for fabricated claims, or to deceive customer identification systems with manipulated voice samples and synthetic identities.
According to a Swiss Re report, litigation and liability claims are also rising significantly, as policyholders more frequently become victims of deepfake‑enabled fraud. Global losses from insurance fraud using deepfakes are already estimated to exceed USD 120 billion annually (Computerweekly).
How can organisations protect themselves?
On the regulatory side, the EU AI Act – being rolled out step by step from August 2024 – requires organisations to label AI‑generated and synthetic content in a machine‑readable way as artificial or manipulated (EU AI Act).
To ensure the authenticity and integrity of media, cryptographic methods are used to bind the origin of material unambiguously to an identity. Current approaches focus on embedding digital signatures in the capture pipeline (image or audio) so that any later unauthorised modification can be detected. On the defensive side, so‑called “poisoning attacks” – the deliberate manipulation of training data – must themselves be systematically detected and countered (BSI).
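The signing step described above can be illustrated with a minimal sketch. Real provenance schemes (for example the C2PA standard) use asymmetric signatures tied to a device or publisher certificate; for a self‑contained illustration, this sketch substitutes an HMAC for the signature step. The key handling and function names are illustrative assumptions, not part of any cited standard.

```python
import hashlib
import hmac

# Illustrative capture-time key. In a real provenance pipeline this would
# be a device-bound private key whose public half is anchored in a
# certificate, not a shared secret.
CAPTURE_KEY = b"device-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Bind an integrity tag to the raw media at capture time."""
    return hmac.new(CAPTURE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media was not modified after capture."""
    expected = hmac.new(CAPTURE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw image bytes..."
tag = sign_media(original)

print(verify_media(original, tag))             # unmodified capture verifies
print(verify_media(original + b"\x00", tag))   # any tampering breaks the tag
```

The design point is that the tag is computed over the exact bytes produced at capture, so even a one‑byte change to the file afterwards fails verification.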
With legally binding digital signatures in mind, the EUDI Wallet – planned for roll‑out in 2027 to enable secure, EU‑wide digital identification via smartphone – is attracting growing attention.
Deep learning models are also being used on the defensive side: they detect anomalies and manipulated media using deepfake training datasets and known systemic weaknesses, and help build proactive protection mechanisms (Fraunhofer AISEC).
Equally important are comprehensive employee training programmes and deepfake simulations, alongside proactive analytics, real‑time reporting and rapid response playbooks.
Where do new opportunities emerge?
Both privately and professionally, it’s essential to engage seriously with the risks posed by deepfakes. At the same time, we shouldn’t overlook the upside of rapid technological progress.
AI technologies are already delivering significant efficiency gains in many organisations – through process automation, AI agents acting as virtual employees, or everyday helpers such as ERGOGPT. Communications and marketing teams are increasingly working with generative AI, enabling them to react faster to current events.
There are also new market opportunities in product management, for example in the development of cyber insurance products. Chatbots, voicebots and AI avatars are transforming customer interaction by providing 24/7 access to products and services.
Initiatives such as the EUDI Wallet are creating a European‑wide foundation for further growth in digital transactions and for better protection of our digital identities.
There is no longer any realistic path “around” AI. The technology is here to stay. All the more reason for us to learn early on how to live and work with it effectively and responsibly.
Text: Anna Kessler