Conversations around deepfakes and disinformation campaigns are heating up, especially in the lead-up to this year’s U.S. election. Put simply, deepfakes are AI-generated fake videos or audio recordings that look and sound like the real thing. They leverage deep learning, a powerful family of machine learning (ML) and artificial intelligence (AI) techniques, to manipulate or create visual and audio content in order to deceive people.

During an election season, this could create chaos among voters. Cybercriminals could, for example, use deepfakes to create fake news and swap out the messages delivered by trusted voices like government officials or journalists, tricking voters into believing something that isn’t real. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election.

But deepfakes don’t just threaten the security of elections. A survey by Tessian also revealed that 74% of IT leaders think deepfakes are a threat to their organization's security.

So, how could deepfakes compromise your company’s security?

“Hacking humans” is a tried and tested method of attack used by cybercriminals to breach companies’ security, access valuable information and systems, and steal large sums of money. And hackers are getting better at it by using more advanced and sophisticated techniques.

Social engineering scams and targeted spear phishing attacks, for example, are fast becoming a persistent threat for businesses. Hackers are successfully impersonating senior executives, third-party suppliers, or other trusted authorities in emails, building rapport over time and deceiving their victims. In fact, last year alone, scammers made nearly $1.8 billion through Business Email Compromise attacks. These types of spear phishing attacks are much more effective, and have a much higher ROI, than the “spray and pray” phishing campaigns criminals previously relied on.

Deepfakes, either as videos or audio recordings, are the next iteration of advanced impersonation techniques that bad actors can use to abuse trust and manipulate people into complying with their requests.

It’s a threat we are already seeing today. In 2019, the CEO of a large energy firm was scammed by fraudsters who impersonated his boss over the phone and requested the fraudulent transfer of €220,000 to a supplier. Similarly, Twitter employees were targeted by a “phone spear phishing attack” earlier this year, whereby hackers posed as IT staff and tricked people into sharing passwords for internal tools and systems. If an employee believes that the person on a video call is the real deal, or that the person calling them is their CEO or IT manager, it’s unlikely that they would ignore or question the request.

How can you protect your business from the threat?

To some degree, today’s deepfakes are quite easy to spot. In poorly made video deepfakes, you can see that people’s lips are out of sync, the speaker isn’t blinking, and there may even be a flicker on the screen. However, as deepfake technology continues to get better, faster and cheaper, it’s likely more hackers will start using the software to further advance their impersonation scams.

Training and awareness are an incredibly important first step in combating the threat. It’s therefore encouraging to see that 61% of leaders are already educating their employees on the threat of deepfakes and another 27% have plans to do so. Remind your staff to pause and verify a request with a colleague via another channel of communication before carrying it out. Also advise your employees to verify the identity of the person requesting an action by asking something only the genuine person would know. For example, they could ask what their partner’s name is or what the office dog is called.

Then, identify who might be most vulnerable to impersonation scams and tailor training accordingly. New joiners, for example, have likely never met or spoken to senior executives in their organization and would have no reference points to verify whether the person calling them is real or fake, or whether the request is even legitimate. Attackers could also target new joiners by pretending to be someone from the IT or security team carrying out a routine set-up exercise. This would be an opportune time to ask their targets to share account credentials. Remember, hackers will do their homework and trawl through LinkedIn to find new members of staff.

Lastly, invest in AI solutions to detect the threat. Ironically, AI is one of the most powerful tools we have to combat AI-generated attacks. AI can learn what normal communication looks like and automatically flag anomalies, such as impersonations, faster and more accurately than a human can.
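To make that idea concrete, here is a deliberately simplified sketch of one anomaly signal such a tool might use: flagging an email that borrows a trusted executive’s display name but arrives from an address never seen for that sender. This is an illustrative toy, not any vendor’s actual detection method, and all names and addresses in it are hypothetical.

```python
# Toy illustration of display-name impersonation detection.
# All senders and addresses below are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Email:
    display_name: str
    address: str


# Addresses previously observed for each trusted sender (assumed data).
KNOWN_SENDERS = {
    "Jane Doe (CEO)": {"jane.doe@example.com"},
}


def is_suspected_impersonation(email: Email) -> bool:
    """Flag mail that uses a trusted display name but comes from
    an address never before seen for that sender."""
    known_addresses = KNOWN_SENDERS.get(email.display_name)
    if known_addresses is None:
        return False  # not claiming to be a known sender
    return email.address not in known_addresses


# A genuine email from the CEO's usual address is not flagged...
print(is_suspected_impersonation(
    Email("Jane Doe (CEO)", "jane.doe@example.com")))        # False
# ...but the same display name from an unfamiliar address is.
print(is_suspected_impersonation(
    Email("Jane Doe (CEO)", "jane.doe@mail-example.net")))   # True
```

Real products combine many such signals, learned from historical communication patterns rather than hand-written lists, but the underlying principle is the same: model what normal looks like, then surface deviations.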

When I recently spoke to deepfake expert Nina Schick, she said that deepfakes are “not an emerging threat. This threat is here. Now.” Businesses need to take action and find ways to proactively protect their people from advanced impersonation attacks, whether they are sophisticated spear phishing attacks or deepfakes. Training and awareness, clear policies and procedures around authenticating and approving requests, and AI-powered security solutions can help to ensure these scams aren’t successful.