Audio Deepfakes and AI Trickery Threaten Elections Around the World

April 8, 2024

Nowadays, almost anyone with simple, widely accessible artificial intelligence software can create audio or video recordings of celebrities or ordinary people acting and talking in ways they never did. The rights of the people whose voices and images are being appropriated will need stronger protection. Major tech companies have teamed up in a historic attempt to stop the misuse of AI in the upcoming elections around the world.

Deepfakes keep improving: each generation is higher quality, more convincing, and closer to reality. With the large number of elections taking place around the world in 2024 comes concern about the role of artificial intelligence in these electoral processes, which may compromise their integrity. Voter manipulation by deepfakes is a central topic of debate in many countries preparing for elections: about 4 billion people will head to the ballot box in over 50 different countries. Academics, journalists, and politicians have expressed concern over the use of AI-generated content in political influence operations. AI-generated content will also have a growing impact on social life more broadly. The recent viral cases have involved celebrities, but given how fast the technology is evolving, we will soon see deepfake videos of ordinary people who are not celebrities or politicians and whose jobs or activities attract no public interest. This poses a very serious threat to societies, which is why collective initiatives against AI-generated trickery are so important.

Case studies of recent deepfakes

Deepfakes, along with their non-AI-based counterparts known as cheapfakes, are not new and have been around for a while. However, since ChatGPT brought AI to a wider audience, billions of dollars have been invested in AI companies over the past year. The proliferation of programs that make such manipulation easy has multiplied the use of artificial intelligence to produce deepfakes that target the public. Beyond video manipulation, there have also been cases of audio deepfakes, which are even easier to produce. An audio deepfake of US President Joe Biden, distributed in New Hampshire and urging people not to vote in the state's primary, reached more than 20,000 people. Steve Kramer, the person behind the manipulation, who paid $150 to produce it, stated that he did it as an act of civil disobedience to draw attention to the risks of artificial intelligence in politics and to the necessity of AI regulation.

Another example with potentially serious political and societal implications is the audio deepfake of London Mayor Sadiq Khan. In early November 2023, an audio clip of Khan went viral in which he appears to insult Armistice Day (the commemoration marking the moment when World War One ended) and to demand that pro-Palestine marches take precedence.

In addition to audio deepfakes of political figures, video deepfakes of celebrities continue to circulate online, such as those of the actor Tom Hanks, who appeared to promote a dental plan, and the famous American YouTuber MrBeast, who appeared to host “the world’s largest iPhone 15 giveaway”. Deepfake images of the singer Taylor Swift, published at the beginning of this year on several social media platforms – X (formerly Twitter), Instagram, Facebook, and Reddit – also went viral. Before it was taken down, one deepfake image of Swift was viewed over 45 million times in the roughly 17 hours it was live on X.

A Collective Initiative against AI-generated trickery

The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, announced at the Munich Security Conference, sees 20 major players, including Adobe, Google, Microsoft, OpenAI, Snap Inc., and Meta, committing to use cutting-edge technology to detect and counteract harmful AI-generated content designed to mislead voters, and to support efforts to foster public awareness, media literacy, and all-of-society resilience. It is the first time that 20 different companies have come on board together against AI-generated trickery.

Participating companies agreed to eight specific commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate.
  • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content.
  • Seeking to detect the distribution of this content on their platforms.
  • Seeking to appropriately address this content detected on their platforms.
  • Fostering cross-industry resilience to Deceptive AI Election Content.
  • Providing transparency to the public regarding how the company addresses it.
  • Continuing to engage with a diverse set of global civil society organizations and academics.
  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience.

This tech-sector initiative targets AI-generated images, video, and audio that might mislead voters about candidates, election officials, and the voting process. However, it does not demand that such content be banned outright.