Imagine getting a frantic voice or video call from a familiar source. There’s an emergency. They request something dramatic, like approving a huge invoice, sending sensitive files or taking assets offline. If this were a phishing email, someone might dismiss it. But when it’s a familiar voice or face, how hard would someone try to verify it’s legit? What if it turned out to be an artificial intelligence (AI)-fueled scam?
Whether or not firms adopt generative AI (GenAI), hackers and security researchers are already exploring how to abuse it to attack anyone. Specifically, security leaders observe nine cyber threats that GenAI will amplify. Each falls into one or more of three overlapping categories: attacks with AI, attacks on AI or erring with AI. All told, there will be more things to attack, more ways to attack them (or trick people), and attacks will become easier and more damaging — at least initially.