What is a Deep Fake?


Cybersecurity

Digitalisation & Technology, 16.10.2023

Deep fakes are a serious threat to society and business. On //next, we have discussed the risks several times. To counteract them, awareness campaigns, technological solutions, strict laws and the promotion of media literacy are necessary, says Sima Krause from ERGO's Information Security Team.

What is a Deep Fake?

Deep Fake is a term used to describe technology that allows digital media content such as videos, images and audio recordings to be manipulated so that it appears genuine when it is in fact fake.

The term "Deep Fake" is derived from "Deep Learning" and "Fake" and refers to the use of artificial intelligence to create manipulated content.

Fields of application of Deep Fake

The technology is based on generative adversarial networks (GANs), which learn to recognise and reproduce patterns and features in data. In the case of deep fakes, GANs are used to change the appearance of faces in videos, imitate people's voices or even create completely fabricated content.
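
The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch: a "generator" produces samples and a "discriminator" tries to tell them apart from real data, and each is updated to outdo the other. The example below is a minimal, illustrative toy in Python/NumPy that learns to mimic a one-dimensional Gaussian distribution; it is not taken from any real deep-fake system, and all names and hyperparameters are our own assumptions.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only): a linear generator g(z) = a*z + b
# learns to mimic samples from N(4, 1) by fooling a logistic-regression
# discriminator D(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator should imitate
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.02, 64

for step in range(5000):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the loss -log D(real) - log(1 - D(fake))
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient of -log D(x_fake) with respect to x_fake, then chain rule
    upstream = -(1 - d_fake) * w
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {np.mean(fakes):.2f} (real mean is 4.0)")
```

Real deep-fake systems apply the same adversarial principle with deep convolutional networks over millions of image or audio samples rather than two scalar parameters, which is what makes the results so convincing.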

Common uses of deep fakes include:

  • Face swapping,
  • Voice imitation,
  • Fake video creation,
  • Digital art and effects.

Deep Fakes can be misused to spread misinformation, violate individuals' privacy, create fake news and for other malicious purposes. It is therefore important to be aware of the dangers of deep fakes and to use techniques to detect and combat such content.

Risks of Deep Fakes

  1. Disinformation and manipulation:
    One of the most obvious risks of deep fakes is the spread of misinformation and manipulation. Politicians, celebrities and other influential people can easily become targets of deep fake attacks, where they are shown in seemingly genuine, scandalous or defamatory situations. This can undermine public trust in the truth and integrity of media and news.
  2. Fake News:
    Deep fakes are a potential weapon in the spread of fake news. By using credible faces and voices, they can make false stories and events believable and thus mislead the public. This can have a significant impact on political stability and social order.
  3. Violations of privacy:
    The technology behind Deep Fakes makes it possible to portray people in intimate or compromising situations without them ever having been there. This is a significant violation of privacy and can have a lasting impact on the lives of those affected.
  4. Fraud and identity theft:
    Deep fakes can be used for fraudulent purposes. Cyber criminals could use fake videos or audio recordings to impersonate someone else and gain access to sensitive information or cause financial damage.
  5. Political instability:
    Deep fakes could also increase political instability as they can be used to discredit political leaders and influence public opinion. This could lead to tensions and conflicts.
  6. Loss of trust:
    The spread of deep fakes can significantly undermine trust in media and digital content. People may become sceptical of everything they see and hear online, making communication and information sharing more difficult. 

More information on Deep Fakes

Information video from the German BSI (Federal Office for Information Security):

https://www.youtube.com/watch?v=JEa4VPskOn0

An example of a deep fake video:

https://youtu.be/cQ54GDm1eL0

Further reading on //next

Deep Fakes: My digital twin is just giving my talk

https://next.ergo.com/en/AI-Robotics/2022/Deepfake-avatar-deep-fake-digital-twin-Mark-Klein.html

Deep Fake? The potential of synthetic media

https://next.ergo.com/en/AI-Robotics/2021/synthetic-media-Deep-Fake-computer-vision-machine-learning-GAN.html

Your opinion
If you would like to share your opinion on this topic with us, please send us a message to next@ergo.de.

Related articles

Digitalisation & Technology 09.02.2022

Cyber resilience

The recurring vulnerability of IT systems was exposed once again in mid-December, when the Log4j vulnerability had the world holding its breath. Can artificial intelligence (AI) provide greater support for cyber security in future? Marc Wilczek, cyber expert and Managing Director of Link11, reveals more in this interview.

Digitalisation & Technology 30.10.2023

How can we recognise AI-generated content?

In the age of artificial intelligence, we are faced with a growing challenge: How do we differentiate truth from fake when lines blur between reality and deception? In a world where deepfake videos and computer-generated images can mislead even the most critical eye, it is crucial to explore innovative solutions that enhance our ability to identify machine-generated content.

Digitalisation & Technology 18.01.2023

Artificial intelligence: Can we trust it?

Just over a year ago, the Zentrum für vertrauenswürdige Künstliche Intelligenz (ZVKI, "Center for Trustworthy Artificial Intelligence") was founded in Berlin. The focus of its work is the protection of consumers: How can it be ensured that AI solutions are trustworthy? How can fair, reliable and transparent AI systems be identified and promoted? ZVKI experts Jaana Müller-Brehm (left) and Verena Till provide answers.