How can we recognise AI-generated content?


Deepfake & Co.

Digitalisation & Technology, 30.10.2023

In the age of artificial intelligence, we face a growing challenge: how do we tell truth from fake when the lines between reality and deception blur? In a world where deepfake videos and computer-generated images can mislead even the most critical eye, it is crucial to explore innovative solutions that enhance our ability to identify machine-generated content.


Over the past 12 months, we have all seen AI-generated images. Some made us laugh (The Rock eating rocks in the Hard Rock Cafe), some let us reminisce (#YearBookChallenge), and some instilled fear in us (Trump in handcuffs). But my main thought after experimenting with Midjourney was: oh, I hope the world is ready for this.

The internet taught us in the early chat rooms of the 90s (sometimes the hard way) that not everything online is genuine. And one of my favorite TV series characters, Howard Wolowitz from The Big Bang Theory, put it succinctly: "There is no place for truth on the internet." But it's more complicated than that.

Author: Markus Sekulla

Hi, I'm Markus. I'm a freelance management consultant in the field of creative/digital communication. In my free and working time (the boundary is not always clear-cut), I like to focus on new work, trends, gadgets and sustainable ideas. In my real free time, I'm quite a health freak: eat, run, sleep, repeat.

Markus Sekulla on LinkedIn

Identifying fakes is becoming increasingly difficult

While writing this, I contemplated whether I should use "impossible" instead of "difficult", because what can be altered with AI, such as voices, is no longer discernible to 99% of our ears. The New York Times recently ran an article headlined "'A.I. Obama' and Fake Newscasters: How A.I. Audio Is Swarming TikTok". More fakes and manipulation are expected, especially as the 2024 US elections approach.

Today (October 2023), it is still often possible to expose AI-generated images or videos, as the fakes typically contain small imperfections. In the aforementioned image of Donald Trump, one could tell it was not real by looking at the hand of a security guard. However, those days are coming to an end. For now, our main tools are common sense and media literacy; without them, even more people might have believed the Pope went shopping in the city center before Saturday Mass, as in the famous fake image of the Pope in Balenciaga.

I'm not alone in finding this development dangerous for democracy. Machine-generated content is already being used at scale to deceive and manipulate people. To quote my aunt: "I saw it with my own eyes!" Furthermore, deepfake videos and fake images can undermine trust in the authenticity of media and content in general.

What could technical solutions look like? 

Technical protection mechanisms are becoming increasingly important in the face of the growing threat of AI-generated deepfakes. Many of the big tech companies are grappling with this issue, in part out of necessity, to curb the flood of fake news on their own platforms. One concept that is frequently discussed in this context is watermarking.

All PowerPoint giants among us are familiar with traditional watermarks from image databases, where the best photo is, of course, the one you have to pay for. In the AI context, watermarking means hiding a signal in a text or an image that marks it as AI-generated. Images can be watermarked by adding a nearly invisible overlay or by embedding information in their metadata, signals that are difficult for us to detect at a glance.

The catch: watermarks provide transparency and can help restore trust, but they are still relatively easy to bypass.
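How easy? Here is a minimal sketch (in Python with the Pillow library; the file names and metadata keys are invented for illustration) of a metadata-based watermark, and of how a plain re-save strips it:

```python
# Toy metadata "watermark": embed a provenance note in a PNG's metadata,
# then show how a plain re-save silently strips it.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Write an AI-provenance note into the PNG's text metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # invented key
    metadata.add_text("generator", "example-model-v1")  # invented key
    image.save(dst_path, pnginfo=metadata)

def is_tagged(path: str) -> bool:
    """Check whether the provenance note is still present."""
    return Image.open(path).text.get("ai_generated") == "true"

tag_as_ai_generated("photo.png", "photo_tagged.png")
print(is_tagged("photo_tagged.png"))    # True

# The "bypass": re-saving just the pixels drops the metadata entirely.
Image.open("photo_tagged.png").save("photo_copy.png")
print(is_tagged("photo_copy.png"))      # False - watermark gone
```

Metadata is the weakest kind of watermark; pixel-level signals survive a re-save, but even they can be degraded by cropping, re-compression or re-generation.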

Protecting images from manipulation by AI

Another interesting approach tackles the problem from the other side: preventing AI manipulation in the first place. Researchers have developed a system called "UnGANable" that protects images from being altered by AI. It introduces invisible noise into images at a mathematical level, preventing AI systems from reading them and generating altered variations. The system builds on the insight that AI systems, especially Generative Adversarial Networks (GANs), first transform an image into "latent code" in order to manipulate it. UnGANable disrupts this process by adding "stealth" noise at the vector level. In tests, UnGANable provided effective protection against common GAN systems and outperformed previous protection methods. The code is open source.
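To give a feel for the underlying idea, here is an illustrative sketch in that spirit, not the authors' published code: a small PyTorch loop that perturbs an image so that an encoder's latent code drifts away from the original, while a bound keeps the change invisible (the encoder below is a stand-in for a real GAN inversion network):

```python
# Illustrative "cloaking" sketch in the spirit of UnGANable, not the
# authors' code: nudge the image so an encoder's latent code no longer
# matches, while keeping the change below a visibility threshold.
import torch
import torch.nn as nn

# Stand-in for a real GAN inversion encoder (image -> latent code).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Flatten(),
)

def cloak(image: torch.Tensor, eps: float = 0.03, steps: int = 40) -> torch.Tensor:
    """Return a visually identical copy whose latent code is pushed
    away from the original's, hampering GAN-based editing."""
    with torch.no_grad():
        clean_latent = encoder(image)
    noise = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([noise], lr=5e-3)
    for _ in range(steps):
        perturbed_latent = encoder(image + noise)
        loss = -torch.dist(perturbed_latent, clean_latent)  # maximise distance
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            noise.clamp_(-eps, eps)   # keep the perturbation imperceptible
    return (image + noise).clamp(0, 1).detach()

protected = cloak(torch.rand(1, 3, 64, 64))  # dummy 64x64 RGB image
```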

Quo Vadis?

In my view, the most crucial part of any technical solution is the labeling on the social media platforms themselves. The technical solution must reach end-users if it is to prevent negative social impacts; a signal that only other machines can recognise seems insufficient. The moment platforms like X, BlueSky, Instagram or TikTok can recognise that a just-uploaded image was machine-generated, it should receive a visible label that all users can immediately identify. This is the only way to effectively keep misinformation from taking root in our minds. The same applies to other platforms such as YouTube or Google Image Search. Google DeepMind has developed and launched such a solution, SynthID, though by its own account it is not yet a universal one.
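What might such a labeling hook look like on the platform side? A hypothetical sketch (all names invented; the detector reuses the metadata check from the earlier sketch, whereas a real platform would also scan for pixel-level watermarks such as SynthID):

```python
# Hypothetical platform-side hook (all names invented): if an uploaded
# image carries a readable AI-provenance signal, attach a label that
# every user will see in the feed.
from dataclasses import dataclass
from typing import Optional
from PIL import Image

@dataclass
class Post:
    image_path: str
    label: Optional[str] = None

def detect_ai_signal(image_path: str) -> bool:
    """Stand-in detector: just the metadata check from above; a real
    platform would also scan for pixel-level watermarks."""
    image = Image.open(image_path)
    return getattr(image, "text", {}).get("ai_generated") == "true"

def on_upload(post: Post) -> Post:
    if detect_ai_signal(post.image_path):
        post.label = "AI-generated content"   # visible to all users
    return post
```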

Finally, the question of all questions remains: Was this text created with AI or not? ;)

Your opinion
If you would like to share your opinion on this topic with us, please send us a message to next@ergo.de.

Related articles

Digitalisation & Technology 09.02.2022

Cyber resilience

The recurrent vulnerability of IT systems was exposed once again when the world held its breath over the log4j flaw in mid-December. Can artificial intelligence (AI) provide greater support in cyber security in future? Marc Wilczek, cyber expert and Managing Director of Link11, reveals more about this in an interview.

Digitalisation & Technology 16.10.2023

What is Deep Fake?

Deep fakes are a serious threat to society and business. On //next, we have talked about the risks several times. To counteract them, awareness measures, technological solutions, strict laws and the promotion of media competence are necessary, says Sima Krause from ERGO's Information Security Team.

Digitalisation & Technology 18.01.2023

Artificial intelligence: Can we trust it?

Just over a year ago, the Zentrum für vertrauenswürdige Künstliche Intelligenz (ZVKI, "Center for Trustworthy Artificial Intelligence") was founded in Berlin. The focus of its work is the protection of consumers: How can it be ensured that AI solutions are trustworthy? How can fair, reliable and transparent AI systems be identified and promoted? ZVKI experts Jaana Müller-Brehm (left) and Verena Till provide answers.