A recent Beyond Identity survey examined how convincing the generative artificial intelligence (AI) chatbot ChatGPT could be at tricking individuals. Respondents were asked to review different schemes and say whether they would be susceptible — and if not, to identify the factors that aroused suspicion. Thirty-nine percent said they would fall victim to at least one of the phishing messages, 49% would be tricked into downloading a fake ChatGPT app and 13% had used AI to generate passwords.
As part of the survey, ChatGPT drafted phishing emails, texts and social media posts, and respondents were asked to identify which were believable. Of the 39% who said they would fall victim to at least one of the options, the social media post scam (21%) and the text message scam (15%) were the most common. For those wary of all the messages, the top giveaways were suspicious links, strange requests and unusual amounts of money being requested.