New artificial intelligence (AI) technologies have raised a number of security concerns. AI can make it easier to commit fraud, as artificially created audio and video become increasingly believable.
A survey conducted by Regula analyzed how security leaders feel about the threat of deepfakes. According to the survey, fake biometric artifacts like deepfake voice or video are perceived as real threats by 80% of companies. Businesses in the U.S. seem to be the most concerned, with about 91% of organizations considering it a growing threat.
At the same time, advanced identity fraud is not only about AI-generated fakes. According to the survey, nearly half of the organizations globally (46%) experienced synthetic identity fraud in the past year. Also known as “Frankenstein” identity, this is a type of scam where criminals combine real and fake ID information to create new and artificial identities. It’s usually used to open bank accounts or make fraudulent purchases.
The banking sector is the most vulnerable to this kind of identity fraud. Nearly all the companies surveyed in the industry (92%) perceive synthetic fraud as a real threat, and almost half (49%) have recently encountered this scam.