Technological Sophistication and the Difficulty of Detecting Deepfakes: Every Second Respondent Is Deceived by Two Deepfakes in the Experiment

July 6, 2024

Deepfakes are a form of audio-visual manipulation produced with artificial intelligence. Recent developments in the field have heightened interest in understanding them, and the literature on deepfakes has multiplied in recent years.

Google search interest in deepfakes fluctuates, but it has risen over the last two years, driven mainly by two factors: the mass adoption of artificial intelligence and the growing attention to the manipulations it makes possible. The search trend shown in the figure below uses Google's 0-100 scale, where higher values indicate greater popularity. Since summer 2022 the search value has stayed at 35 or higher, meaning the term has remained at least moderately popular worldwide, reaching its peak in early 2023.

Figure 1: The worldwide frequency of ‘deepfakes’ (2018-2023). Source: Google Trends, accessed November 28, 2023.

The country with the most Google searches for deepfakes is South Korea, followed by Brunei and St. Helena. China is number four with the Philippines rounding out the top five.
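The worldwide trend in Figure 1 and the country ranking above can be retrieved programmatically from Google Trends. Below is a minimal sketch assuming the unofficial pytrends Python client; the keyword, timeframe, and output handling are illustrative and not part of the original analysis.

```python
# Minimal sketch: pull the 'deepfakes' interest series and country ranking
# from Google Trends via the unofficial pytrends client (assumption: pytrends
# is installed and the endpoints it wraps are still available).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["deepfakes"], timeframe="2018-01-01 2023-11-28")

# Worldwide interest over time on Google's 0-100 scale (as in Figure 1).
over_time = pytrends.interest_over_time()
print(over_time["deepfakes"].idxmax())  # week with peak popularity

# Interest by country, normalized on the same 0-100 scale.
by_country = pytrends.interest_by_region(resolution="COUNTRY", inc_low_vol=True)
print(by_country["deepfakes"].sort_values(ascending=False).head(5))
```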

65% of respondents have heard of deepfakes

An experiment was conducted among students from various fields in Kosovo and the United States. The experiment consisted of two parts: testing participants’ understanding of deepfakes and their ability to distinguish deepfake content from real content. Of the 68 participants (55 in Kosovo and 13 in the US), 65.3% indicated that they were familiar with the term ‘deepfakes’. Most of these participants also indicated that they knew what deepfakes are, while 15.3% confused deepfakes with cheapfakes. The latter is also a form of audiovisual manipulation, but it is carried out without the involvement of artificial intelligence.

Two out of six deepfakes deceive over 50% of respondents

Participants in the survey were shown six different videos featuring various figures from world politics (the sitting American president, a former American president, and the Russian president), famous Hollywood actors (Tom Cruise and Morgan Freeman), and the renowned model Bella Hadid. All of these videos were deepfakes, produced at different times and for different purposes. The deepfake of former President Trump is notable for being the first deepfake officially used by a political party, having been published on the Facebook page of the Flemish Socialist Party in Belgium in 2018. The most recent of the six was the deepfake featuring Bella Hadid.

Each respondent was asked to judge the authenticity of the videos by indicating whether they believed the content to be true or false, and to what extent they were confident in their answers. Additionally, respondents identified the characteristics that led them to believe the content was either a deepfake or real and unmanipulated.

The survey included deepfakes of varying quality. Of the six videos used, respondents were most deceived by the ones featuring actors Tom Cruise and Morgan Freeman. These two deepfakes are among the best-crafted in recent years, making them especially difficult to distinguish from real content.
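To make the 50% threshold in the heading above concrete, the sketch below shows one way per-video deception rates could be tabulated with pandas. The column names and the handful of placeholder rows are illustrative only; they are not the study's data.

```python
import pandas as pd

# Illustrative only: placeholder judgments, NOT the study's responses.
# One row per (respondent, video) pair; judged_real == True means the
# respondent was deceived, since every video shown was a deepfake.
responses = pd.DataFrame(
    {
        "respondent": [1, 1, 2, 2, 3, 3],
        "video": ["tom_cruise", "trump", "tom_cruise", "trump", "morgan_freeman", "trump"],
        "judged_real": [True, False, True, False, True, False],
    }
)

# Share of respondents deceived by each video.
deception_rate = responses.groupby("video")["judged_real"].mean()

# Videos that deceived more than half of the respondents.
print(deception_rate[deception_rate > 0.5])
```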

Technological quality, the key factor in detecting deepfakes

Some of the recurring reasons given for detecting deepfakes relate to the design or technology used to create them. Respondents noticed something unnatural in the mouth, lips or eyes. Comments included a lack of lip-movement synchronization, a different voice timbre, a mismatch between the voice and the image, an unnatural position of the eyes and strange movement of the chin. Meanwhile, those who were deceived into believing that the Tom Cruise and Morgan Freeman deepfakes were real videos cited the natural movement of the body, eyes, cheekbones or lips, and believed that the voice had not been generated by artificial intelligence. Technological quality is thus the primary factor respondents rely on when evaluating a video.

In about 15% of cases, respondents identified a video as a deepfake based on its content, sentences, or the language used. In the case of the Freeman and Bella Hadid deepfakes, almost half of the respondents rated the videos as manipulated because they had seen them before and knew they were not real. Meanwhile, over 25% recognized deepfakes from gestures and facial expressions.

A lack of commitment to programs that detect deepfakes

The results of this survey indicate that, in some cases, every second participant is deceived and fails to recognize a deepfake. This reiterates the need for detection programs or tools that help people verify content. The rapid development of artificial intelligence is making it a central tool across many societal functions, and anti-democratic actors around the world are likely to exploit it. The gap between tools that make deepfakes easy to create and tools that can detect them has widened recently. Closing it will require greater effort from leading artificial-intelligence companies, global and regional organizations, and the governmental institutions responsible for information and technology. Unfortunately, such efforts are currently lacking.