Unmasking deepfake phishing: Protecting yourself from new tactics in the age of AI

April 23, 2024 · 4 mins to read

Phishing is not new; it has been used to hack and infiltrate for decades. Today, everyone is trying to keep pace with technological evolution, and phishers are no exception: they are evolving their techniques too. Recently a new tactic has surfaced, this one based on AI generation. Deepfake phishing adds to the range of challenges we will face from now on, and it is one we must be very aware of.

A deepfake is an audio-visual manipulation based on AI technology. Video deepfakes have been circulating for less than ten years; more recently, audio deepfakes have become especially popular. In recent years, various applications have been developed to produce deepfake video or audio content. Yet despite the rapid development of artificial intelligence and the growing number of programs that enable deepfakes, there is a shortage of tools that help each of us protect ourselves from the manipulations deepfakes make possible.

How does deepfake phishing work?

Deepfake phishing can compromise our security both professionally and personally. Phishers can now use deepfakes to create fake images, videos, and audio. Programs developed for entertainment, such as applications that show you at another age, gender, or race, have collected data that can be used to produce deepfakes of anyone, not only of public figures as was the case until now. Access to that data enables phishers to carry out attacks using deepfakes to achieve their goal: manipulating people and compromising their security. Beyond emails and messages, video calls and voice messages have also become sophisticated attack channels as deepfakes have entered phishing. Phishers can use deepfakes during an online meeting, either by creating a person who does not exist or by wearing someone else's face, as happened in Hong Kong, where an employee paid out 25 million dollars after a video call with a deepfake 'chief financial officer', and in China, where a person transferred over 600 thousand dollars without realizing he was a victim of fraud. Deepfake phishing has also been used in audio messages, and there are already programs that can clone anyone's voice, such as VALL-E, developed by Microsoft.

How to mitigate the risk of deepfake phishing?

In the absence of dedicated tools, the usual rule for phishing applies: unusual requests for money should raise suspicion. With deepfake phishing, these requests can come across as even more convincing, so we all need to develop critical judgment and learn some ways to spot the anomalies in a deepfake phishing attempt. These can be divided into technology-related signs, but it is also important to go beyond technology.

Technology-related signs include:

  • jerky head movements,
  • lip-sync inconsistencies,
  • unnatural torso movements, and
  • unusual audio cues.

Beyond technology, it is important to:

  • stay informed, as awareness can be the key to identifying potential threats,
  • be skeptical of any request for sensitive information such as passwords or credit card details, and
  • be cautious if something looks too good to be true or raises doubts.

So, success in protecting against deepfake phishing depends on us: on our interest in keeping up with technological developments, on staying well informed about innovations and risks, and on our care with the personal and professional information we share online. Deepfake technology is a 'fast train' that will continue to bring the world innovations that amaze us, but also dangers we must watch out for. The refinement of deepfakes makes them an ever more dangerous tool in the hands of fraudsters and manipulators, and the difficulty of detecting them multiplies this fast-growing threat. Amid all this, institutions and organizations must work with their staff to make them aware of these innovations and risks, and to be prepared not to believe anything a priori, however convincing a video, photograph, or audio message may be, but always to verify first.