The term “deepfake” may sound like science fiction, conjuring images of robots and buzzing computers. But the concept is hardly new: counterfeiting has long been a reality, and even spy movies of yore are peppered with fraudulent uses of technology the hero must overcome.
What, then, is the reality of today’s deepfakes, and why should they concern us? The term, a blend of “deep learning” and “fakes,” nods to an evolving landscape of computer-generated images and sound. At a time of intense buzz around artificial intelligence (AI) transparency and fairness, deepfakes are a critical area for future planning. Natural language processing, for example, is advancing by leaps and bounds; it is one of the richest areas of AI and a particular challenge when it comes to the evolution of deepfakes.
Today’s deepfakes, as outlined in my recent conversation with renowned business and technology futurist Bernard Marr, fall into two camps. The first involves one-way expression from a person we recognize: artificial video that appears to feature Tom Cruise, for example. For a viewer with some background in AI and machine-generated imagery, these deepfakes can be spotted fairly easily; the tell-tale signs are many, from pixelated lips to poorly synced audio. Likewise, an artificial rendering of an individual we know personally must clear a high bar to convince us. And because the content only moves in one direction, the implications tend to be relatively minimal.
Potentially more ominous are deepfakes featuring someone entirely fictitious that aim to extract information. These could involve computer-created images or voices, and because they aren’t intended to sound or look like anyone well known, the bar to convince us they are real is much lower. If the audience cannot identify the deepfake as fabricated, the bot behind it can manipulate them, and that is a real problem.
Fortunately, the channels themselves can help security leaders defend against such risks. Because a deepfake is transmitted in a digital environment, it carries a data payload that gives cybersecurity professionals insight: how the target accesses the internet, which device or program is used, and many other details about how a fraudster engages with a victim can all provide clues. Some emerging technologies are also helping video makers authenticate their videos. For example, a cryptographic algorithm can insert hashes at set intervals during the video; if the video is later altered, the recomputed hashes will no longer match.
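To make the hashing idea concrete, here is a minimal sketch in Python. It assumes the video is available as raw bytes and uses an illustrative fixed segment size and SHA-256; real authentication schemes may embed the hashes in the stream itself or cryptographically sign them, details this sketch leaves out.

```python
import hashlib

# Illustrative choice: hash the stream in 1 MiB segments.
SEGMENT_SIZE = 1 << 20

def segment_hashes(video_bytes: bytes) -> list[str]:
    """Return a SHA-256 digest for each fixed-size segment of the video."""
    return [
        hashlib.sha256(video_bytes[i:i + SEGMENT_SIZE]).hexdigest()
        for i in range(0, len(video_bytes), SEGMENT_SIZE)
    ]

def verify(video_bytes: bytes, published_hashes: list[str]) -> bool:
    """Recompute the segment hashes and compare to the creator's published list.

    Any edit to a segment changes its digest, so tampering is detectable,
    provided the published hashes come from a trusted channel.
    """
    return segment_hashes(video_bytes) == published_hashes
```

The design point is simple: the hashes themselves prove nothing unless they are distributed out-of-band or signed, which is why these schemes pair hashing with some trusted source of truth.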
But these are early days. In a few years, it will become increasingly difficult for enterprise security professionals and everyday users to identify a deepfake. Several practices can help frame this risk and protect people and businesses:
- Most important and most constant is vigilance: fraudsters are relentless and always at work, looking to take advantage of every loophole or weak spot.
- Good security procedures can also go a long way toward thwarting would-be fraudsters. Businesses can fight fire with fire, leveraging the same capabilities fraudsters use (machine learning and advanced analytics, for example) to identify deepfakes; a minimal sketch of this idea follows the list.
- A layered defense strategy is also key, particularly as it relates to how fraudsters may try to distribute or deploy deepfakes. The threat landscape is constantly evolving, so there is arguably no more important point of focus than guarding the front door.
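To illustrate the “fight fire with fire” point above, here is a hedged sketch of training a simple classifier to flag deepfakes. Everything in it is a placeholder: the features (in practice these might be lip-sync error, blink rate, or compression-artifact scores) and labels are synthetic, and production detectors use far richer learned representations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-video features and labels; a real pipeline
# would extract these from the media itself.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # placeholder feature matrix
y = rng.integers(0, 2, size=200)   # 1 = deepfake, 0 = genuine

# Train a simple classifier and check it on held-out examples. With random
# placeholder data the accuracy is meaningless; the point is the workflow.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```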
Private industry is already motivated to tackle the challenges that lie around the corner with deepfakes, and legal frameworks and education will also help establish firm guardrails. The most critical takeaway: the nuances of this technology will continue to shift, but the core best practices should not. With awareness and vigilance, users and businesses alike can stay one step ahead of deepfake fraudsters.