Artificial Intelligence has been around for decades, yet innovation around it has skyrocketed over the past couple of years. The most recent advancements in generative AI make the technology accessible and approachable for businesses and consumers in ways we’ve never seen before. It can now help create efficiencies in the workplace that increase productivity, improve internal operations, and enhance creativity. 

Yet the evolution of large language models and the use of generative AI can open doors for fraudsters in unprecedented ways, giving them new avenues to deceive businesses and consumers alike. From personalized, convincing messages tailored to a specific victim to fake accounts built by mining public social media profiles and other personal information, it is becoming harder to distinguish what’s real from what’s fake. 

Fraud continues to be a growing concern. Experian’s 2023 Identity and Fraud Report found that 52% of consumers feel they are more of a target for online fraud than they were a year ago, while over 50% of businesses report a high level of concern about fraud risk. As businesses continue to develop and evolve their fraud strategies, it’s crucial that they understand how generative AI is being used to commit fraud and how they can use it to fight fraud. 

Different types of AI-enabled fraud

Generative AI enables fraudsters to automate the previously time-consuming and complex process of stitching together fake, synthetic identities that interact like real people across thousands of digital touchpoints, fooling businesses and consumers into thinking they are legitimate. Below are a few schemes to look out for: 

  • Text messages: There are two lines of attack coming from texts. First, generative AI enables fraudsters to replicate personal exchanges with someone a victim knows, using well-written scripts that appear authentic and are very difficult to discern as fake. Second, bad actors can conduct multi-pronged attacks, running text-based conversations with multiple victims at once and manipulating them into carrying out actions that can involve transfers of money, goods, or other fraudulent gains.
  • Fake video or images: Bad actors can train AI models with deep-learning techniques on very large collections of digital assets like photos and videos to produce high-quality, authentic-looking videos or images that are virtually indiscernible from real ones. Once trained, these models can blend and superimpose images onto other images and into video content at alarming speed. More concerning, AI-based text-to-image generators let fraudsters with little to no design or video-production skill do the same. Because these tools work so quickly, they dramatically increase the scale and effectiveness of fraud attacks.
  • “Human” voice: Perhaps the scariest of the new methods at a fraudster’s disposal is the growth of AI-generated voices that mimic real people. This scheme creates a wide range of new risks both for consumers, who can be easily convinced they are speaking to someone they know, and for businesses that use voice verification for applications such as identity verification and customer support. 
  • Chatbots: Bad actors use friendly, convincing AI chatbots to build relationships with victims, with the ultimate goal of persuading them to send money or share personal information. Following a prescribed script, these chatbots can sustain a human-like conversation with a victim over long periods of time to deepen an emotional connection.

Fighting AI with AI

To combat these threats now and in the future, companies should leverage advanced technology, like machine learning and AI, to protect their businesses and stay one step ahead of fraudsters. 

Generative AI can be used to fight and prevent fraud by analyzing patterns in data and surfacing risk factors, so companies can spot early indicators of fraudulent behavior. Synthetic data created by generative AI can also speed the development and testing of new fraud detection models, and it can help investigators examine suspicious activity by generating plausible fraud scenarios and highlighting where risk is likely to appear. 
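As a rough illustration of the synthetic-data idea, the sketch below simulates simple transaction records (a stand-in for data a generative model might produce) and uses them to prototype and test a fraud classifier. The feature names, distributions, and choice of scikit-learn model are illustrative assumptions, not a description of any particular product or of Experian's methods.

```python
# Minimal sketch: prototyping a fraud detection model on synthetic data.
# The simulated records below stand in for generative-AI-produced data;
# all field names and distributions are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_legit, n_fraud = 9_500, 500

# Legitimate transactions: modest amounts, mostly known devices, daytime hours.
legit = np.column_stack([
    rng.gamma(2.0, 40.0, n_legit),      # transaction amount
    rng.binomial(1, 0.05, n_legit),     # new-device flag
    rng.normal(14, 4, n_legit) % 24,    # hour of day
])

# Fraudulent transactions: larger amounts, new devices, late-night activity.
fraud = np.column_stack([
    rng.gamma(4.0, 120.0, n_fraud),
    rng.binomial(1, 0.7, n_fraud),
    rng.normal(3, 2, n_fraud) % 24,
])

X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(n_legit), np.ones(n_fraud)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# A standard classifier stands in for whatever detection model is being tested.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced")
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), digits=3))
```

The value of this pattern is that a candidate model can be stress-tested against rare or emerging fraud patterns long before enough real labeled cases exist.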

Using advanced technology like machine learning is also an important part of combating this fraud. In fact, Experian’s report found that 90% of businesses that leverage machine learning reported a high level of confidence in its effectiveness for fraud detection and prevention. Machine learning helps businesses detect and prevent fraud in real time, analyze large volumes of transactions and data sets so that fraud risks are identified quickly without hindering the customer experience, and evolve their fraud prevention strategies over time.  
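To show what real-time scoring can look like in practice, here is a minimal sketch that trains an unsupervised anomaly detector on historical transactions and scores each new transaction as it arrives. IsolationForest is used only as a generic example of a machine-learning model; the features and the review threshold are assumptions for illustration.

```python
# Minimal sketch of real-time transaction scoring with an anomaly detector.
# Feature names and the review threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Historical transactions: amount, hour of day, distance from home address (km).
history = np.column_stack([
    rng.gamma(2.0, 40.0, 5_000),
    rng.normal(14, 4, 5_000) % 24,
    rng.exponential(5.0, 5_000),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

def needs_review(amount: float, hour: float, distance_km: float) -> bool:
    """Return True if the transaction should be routed for extra checks."""
    score = detector.decision_function([[amount, hour, distance_km]])[0]
    return score < 0  # negative scores are more anomalous under this model

# Each transaction is scored in milliseconds, so legitimate customers are not
# held up while outliers are flagged for step-up verification.
print(needs_review(38.0, 13.5, 2.0))      # typical purchase -> likely False
print(needs_review(4_200.0, 3.0, 900.0))  # unusual purchase -> likely True
```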

As AI-driven fraud grows in sophistication and frequency, businesses will need to adopt more modern fraud detection solutions. A multilayered approach, supported by a flexible and extendable orchestration platform that leverages data, AI, machine learning, and advanced analytics, will help businesses stay ahead of criminals and keep their customers aware of evolving fraud threats. By building a long-term defense strategy against AI-enabled attackers, companies can safeguard themselves and their customers.
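To make the layered idea concrete, here is a minimal sketch of how an orchestration flow might chain hard rules, a model risk score, and step-up verification. The layer names, thresholds, and decision values are hypothetical and only illustrate the pattern, not any specific platform.

```python
# Minimal sketch of a layered fraud-check flow an orchestration platform
# might coordinate. All thresholds and decision labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_device: bool
    ml_risk_score: float  # e.g., output of a model like the sketches above (0-1)

def decide(txn: Transaction) -> str:
    # Layer 1: hard business rules catch the most obvious abuse outright.
    if txn.amount > 10_000 and txn.new_device:
        return "decline"
    # Layer 2: a machine-learning risk score handles the ambiguous middle.
    if txn.ml_risk_score > 0.8:
        return "decline"
    if txn.ml_risk_score > 0.5:
        # Layer 3: step-up verification (OTP, document check) only when needed,
        # so low-risk customers pass through without friction.
        return "step_up_verification"
    return "approve"

print(decide(Transaction(amount=120.0, new_device=False, ml_risk_score=0.1)))
print(decide(Transaction(amount=950.0, new_device=True, ml_risk_score=0.65)))
```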