
Deepfakes: the cyberthreat that impersonates identities

Written by iDISC Information Technologies | Feb 23, 2024 3:22:00 PM

Today, most of us have several devices we use to connect to the Internet at any time and from anywhere. However, the trust we place in these tools exposes us to multiple threats.

Do you know how to protect yourself from the most sophisticated cyberthreats? Fraud and cyberattacks have been on the rise for years, and deepfake technology now powers some of the most dangerous scams on the planet.

What are deepfakes? 

The term deepfake is a portmanteau of deep learning, in reference to artificial intelligence, and fake. Cybercriminals manipulate audio-visual content using artificial intelligence and create fake but realistic copies of people, imitating their voice, gestures, facial features and even the way they speak. This enables them to trick victims into believing a lie.

Cybercriminals are relentless, and the realism of their fakes is improving all the time. For example, a deepfake can capture the face of a famous billionaire, such as Carlos Slim or Ricardo Salinas Pliego, and make him appear to recommend a miracle investment, something that is already happening in Mexico. Interesting, right? But other deepfakes are far more dangerous, and here we tell you how to detect them.

This strategy uses audio and video manipulated with surprisingly realistic generative AI. It makes viewers believe that what they are seeing, hearing, or reading is true, even if the situations are false or distorted. This is done using generative adversarial networks (GANs), which generate synthetic content that closely resembles authentic data (a minimal sketch of the idea follows the list below). Some of the main forms of deepfakes are:

  • Impersonation of family members in emergency situations, creating a sense of urgency and asking victims for money.
  • Hacking of personal social media accounts to promote fake investments. These profiles are used to spread misleading links, leading recipients to click and fall for the scam.
  • Phishing tactics to compromise companies’ passwords or encryption systems and gain access to their corporate accounts. Through forged emails, attackers can steal confidential information such as bank details or other sensitive data.
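
To make the GAN idea above a little more concrete, here is a minimal, illustrative sketch in Python (assuming PyTorch is available) of how a generator and a discriminator are trained against each other. The toy data, network sizes, and training settings are our own assumptions for illustration; real deepfake systems use far larger image and audio models.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce samples
# that a discriminator can no longer tell apart from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2  # toy sizes chosen purely for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # "Real" data: samples from a fixed Gaussian stand in for authentic content.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # 1) Train the discriminator to separate real from fake samples.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The same adversarial loop, scaled up to faces and voices, is what makes the resulting fakes so convincing.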

The first deepfakes emerged in 2017 and have been constantly evolving ever since. They have now become a growing threat to organisations’ security.

In 2021, the FBI issued a warning about this, alerting the public to the falsification of real people’s voices and images. The success of most of these attacks lies in victims’ unfamiliarity with such convincing techniques.

How to detect a deepfake

This cybercrime is expected to increase as the methods and tools become more sophisticated. The phenomenon will become increasingly damaging, with the potential to compromise operating systems and obtain personal or financial information. This is of particular concern in work environments where people use platforms such as Slack or Microsoft Teams.

However, there are key elements that allow us to determine the authenticity of what we see or hear, and to differentiate between possible deepfake fraud and legitimate content. For example:

  • Unsynchronised lip movement and audio: this is a key element in detecting deepfakes, so paying close attention to it is imperative. Often, anomalies are easy to spot when the audio falls out of sync with the lip movement, or vice versa.
  • Obvious changes in lighting: sometimes, visual inconsistencies occur due to the merging of images with different lighting.
  • Robotic facial movements and strange expressions: facial expressions are decisive in assessing the legitimacy of audiovisual content. If we see unnatural expressions, there may have been some manipulation. For example, strange or irregular blinking, or simply its complete absence, is suspicious (the sketch after this list illustrates a simple blink check).
  • Incoherent or contrived speech: it is essential to analyse the context of the message and to assess the coherence, cohesion, and fluency of what we are hearing. A “metallic” voice may indicate manipulation.
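
As a rough illustration of the blinking cue above, the sketch below (assuming Python with OpenCV installed) counts frames in which a detected face shows no detectable eyes; a near-total absence of such frames means the subject almost never blinks. The file name suspect_video.mp4 and the 1% threshold are assumptions for illustration only; purpose-built deepfake detectors are far more sophisticated.

```python
# Heuristic blink check (illustrative only): flag videos in which a face
# almost never closes its eyes, one of the warning signs listed above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical file name
frames_with_face, closed_eye_frames = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        continue                      # no face in this frame
    x, y, w, h = faces[0]             # analyse the first detected face
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    frames_with_face += 1
    if len(eyes) == 0:
        closed_eye_frames += 1        # eyes not found: likely a blink

cap.release()
if frames_with_face:
    blink_ratio = closed_eye_frames / frames_with_face
    print(f"Frames with eyes closed: {blink_ratio:.1%}")
    if blink_ratio < 0.01:            # assumed threshold for illustration
        print("Warning: almost no blinking detected; inspect the video further.")
```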

How to defend ourselves?

These threats are expected to continue to develop and increase in scope. Our lack of knowledge about this technology is the main reason for its success. Therefore, it is vital that organisations implement preventive measures to mitigate the risks of these cybercrimes and reduce their vulnerability to deepfakes. Some basic preventive measures include:

  • Applying strong and secure password policies that require a variety of letters, numbers, and special characters (see the sketch after this list for a basic example).
  • Keeping all operating systems, software, and applications up to date with the latest security versions. Anti-virus and anti-malware software must also be installed and kept up to date on all devices connected to the corporate network.
  • Implementing awareness campaigns and training for employees on the growing threat of deepfakes: workers and collaborators are the best line of defence if they remain alert, think critically, and are careful with the content and links they receive.
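
As a simple illustration of the password-policy point above, the following sketch (in Python) checks a candidate password for minimum length and a mix of character classes. The specific rules and the meets_policy helper are our own assumptions for illustration; in practice, these rules are usually enforced by the organisation’s identity provider or directory service.

```python
# Basic password-policy check (illustrative only): length plus a mix of
# lowercase, uppercase, digits, and special characters.
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Return True if the password satisfies the basic policy sketched above."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("Correct-Horse-42!"))  # True: long, with mixed character classes
print(meets_policy("password123"))        # False: too short, no uppercase or special character
```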

In conclusion, cybercriminals are leveraging the latest technologies to attack organisations, so it is crucial that organisations invest in AI-enhanced cyberthreat prevention and protection mechanisms. These are just some of the measures that can help us prevent, detect, and respond efficiently to potential cybercrime and reduce exposure to sophisticated social engineering fraud.

We hope you find this information useful. We will be happy to provide you with more details on company security against emerging cyberthreats in 2024. Do not hesitate to contact us or visit our blog for more information.