The dangers of deepfakes

Deepfake, or AI-driven face swapping, is an image synthesis technique based on artificial intelligence. It is mainly used to superimpose existing images and videos onto other images or videos, producing tampered footage that can look strikingly realistic. Deepfakes can also spread fake news and defamatory statements. Imagine finding a video of a head of state declaring war: how would you know whether it is genuine? This is the threat of deepfakes, the emergence of a new kind of fake news.

Jordan Peele is a director and actor. In partnership with Buzzfeed, he created a video in which a deepfake of Barack Obama insults Donald Trump, in order to warn Internet users against deepfakes: videos tampered with by artificial intelligence.

The main risk is the instrumentalization of information. Deepfakes raise fears of further manipulation of information, particularly during electoral campaigns. "Deepfakes are a threat to democracy; they could lead to the end of trust and truth and give rise to a society in which people will no longer believe in any information," said Francesco Paulo Marconi, head of R&D at the Wall Street Journal.

We can imagine other dangers of deepfakes, for example revenge porn and pornographic videos. The technique has already been used in porn to produce fake videos using the faces of celebrities without their knowledge (Daisy Ridley, who plays Rey in Star Wars, was a victim of this kind of deepfake).

Recent technological progress allows anyone with a computer or smartphone to manipulate visual content. We have all heard about the mobile application FaceApp. Using neural networks based on artificial intelligence, the application generates very realistic transformations of faces on photographs, and it is considered one of the easiest and most popular face-manipulation tools available to the general public. Its speed and impressive results make it fun, but dangerous.

How to prevent and fight deepfakes?

Facebook and Amazon have announced the "Deepfake Detection Challenge". The idea is to advance machine learning so that it becomes capable of quickly detecting deepfakes. Twitter will introduce new regulations to fight deepfakes that could pose a threat to a person's safety. As a reminder, Twitter has already banned pornographic deepfakes since 2018.

Some details can help us recognize a deepfake. We can look at the synchronization between the mouth and the words, or at the consistency between the face and the background. Furthermore, poor video quality can be a telling sign of a deepfake.
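The cues above can be thought of as signals that combine into an overall suspicion score. Below is a toy sketch in Python: the input values and weights are purely hypothetical (a real detector would estimate these signals from the video itself, typically with machine learning).

```python
# Toy deepfake-suspicion score combining the cues mentioned above.
# All inputs are hypothetical, pre-computed signals in [0, 1]:
#   lip_sync_error      - mismatch between mouth movement and audio
#   face_scene_mismatch - inconsistency between the face and the background
#   quality_loss        - compression/quality degradation in the footage
def suspicion_score(lip_sync_error, face_scene_mismatch, quality_loss):
    # Illustrative weights: lip-sync problems are treated as the strongest cue.
    return (0.5 * lip_sync_error
            + 0.3 * face_scene_mismatch
            + 0.2 * quality_loss)

def looks_like_deepfake(lip_sync_error, face_scene_mismatch, quality_loss,
                        threshold=0.5):
    # Flag the video when the combined score crosses the threshold.
    score = suspicion_score(lip_sync_error, face_scene_mismatch, quality_loss)
    return score >= threshold
```

A video with badly synchronized lips and visible artifacts would score high, while clean, consistent footage would score low; real systems are far more sophisticated, but the principle of fusing several weak cues is the same.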

However, new deepfakes use "generative adversarial network (GAN)" technology to assess their own detectability even before they are published. In other words, they test themselves. As a result, it will be increasingly hard to detect a deepfake.
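The adversarial idea can be sketched as a loop between two components: a generator keeps refining its fake until a discriminator can no longer flag it. The sketch below is purely illustrative; the `detectability` function is a stand-in for a real discriminator network, and "realism" is an abstract number rather than an actual image.

```python
def detectability(realism):
    """Stand-in discriminator: the more realistic the fake,
    the lower the probability it gets flagged."""
    return max(0.0, 1.0 - realism)

def generate_until_undetected(threshold=0.2, step=0.1):
    """Stand-in generator: keep 'refining' the fake until the
    discriminator's detection probability drops below the threshold."""
    realism = 0.0
    rounds = 0
    while detectability(realism) >= threshold:
        realism += step  # each round, the generator improves the fake
        rounds += 1
    return realism, rounds
```

In a real GAN both sides are neural networks trained against each other, so every improvement in the detector immediately pushes the generator to produce harder-to-detect fakes; this arms race is why detection keeps getting more difficult.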

Nevertheless, deepfakes can also be used for good reasons. The association "Solidarité Sida" made a deepfake video in which we see Donald Trump at his desk in front of a row of American flags. He says: "I have some great news: today we have eradicated AIDS. Thank God. Thank God. Thank you Donald Trump. It is done. I took care of it personally." In this fifty-second video, the American president asserts that "everything is over".

We have to wait until the end of the sequence to discover the trick: "This is fake news."

The association broadcast this montage of a fake speech by the American president to mobilize heads of state before the start of the Global Fund's campaign against AIDS.

Link to Barack Obama's deepfake:

Link to Donald Trump's deepfake:

About Alain ONG

Master 2 student in E-Commerce at the Strasbourg Faculty of Law. Passionate about new technologies and the digital world. I am also interested in topics such as artificial intelligence, cryptocurrencies and cybermarketing.
