Security Leaders Discuss Marco Rubio AI Imposter

Secretary of State Marco Rubio was recently impersonated via text messages and AI-generated voice messages sent to a United States governor, a member of Congress, and foreign ministers. The imposter reportedly mimicked Rubio's voice and writing patterns using AI-powered software in a probable attempt to manipulate targets.
At this time, it is unclear who is behind these impersonation attempts, although it is believed that the goal was to access information or accounts.
Below, security leaders discuss the implications of this campaign.
Security Leaders Weigh In
Thomas Richards, Infrastructure Security Practice Director at Black Duck:
This impersonation is alarming and highlights just how sophisticated generative AI tools have become. The imposter was able to use publicly available information to create realistic messages. While this campaign has, so far, impersonated only one government official, it underscores the risk of generative AI tools being used to manipulate targets and conduct fraud. The old software world is gone, giving way to a new set of truths defined by AI and global software regulations. The tools to do this are widely available and should come under some degree of government regulation to curtail the threat.
Margaret Cunningham, Director, Security & AI Strategy at Darktrace:
Although the attempt to impersonate Marco Rubio was ultimately unsuccessful, it demonstrates just how easily generative AI can be used to launch credible, targeted social engineering attacks. This threat didn't fail because it was poorly crafted; it failed because it missed the right moment of human vulnerability. People often don't make decisions in calm, focused conditions. They respond while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.
The use of generative AI to create deepfake audio, imagery and video is an increasing concern. While media manipulation isn’t new, AI has dramatically lowered the barrier to entry and accelerated both the speed and realism of production. What once required significant time and technical skill can now be done quickly, cheaply, and at scale — making these tactics accessible to a far wider range of threat actors.
This underscores a shifting threat landscape: trust signals like names, voices, and platforms have become part of the attack surface. As AI tools become more powerful and accessible, attackers will continue testing these weak points. We can’t expect people to be the last line of defense. Security strategies must evolve to reflect how decisions are made in the real world, and technology must be at the center of defending against these threats, especially to keep pace with a problem that is moving at machine speed.
Trey Ford, Chief Information Security Officer at Bugcrowd:
Whether you receive inbound email, phone calls, texts, or snail mail (any of which could be spam or phishing), the question we have to ask is: "Who is this from?" This challenge of authenticity is the notion of identity proofing: the process of verifying a person's claimed identity by collecting and validating evidence of that identity.
Around election time (at least in the U.S.), we all receive messages claiming to be from candidates. Asking "Is this real?" is a healthy, natural response. Celebrities, executives, and public figures will be more prone to having their identities faked; with the advent of generative AI, fabricating a compelling synthetic identity has become both cheaper and easier.
When receiving unexpected communications from an unknown individual, or from an expected entity over an unexpected communications channel, it is prudent to perform identity proofing before taking any action.
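To illustrate the idea, here is a minimal sketch in Python of an out-of-band identity-proofing check. The contact directory, channel names, and function names are hypothetical illustrations of the principle Ford describes, not any specific product's logic.

```python
# Hypothetical sketch: flag requests that arrive over a channel we have
# never associated with the claimed sender, and require out-of-band
# confirmation before acting. All names and data here are illustrative.

# Known contacts and the channels previously verified for each of them.
KNOWN_CHANNELS = {
    "alice@example.gov": {"email", "office_phone"},
}

def needs_identity_proofing(sender: str, channel: str) -> bool:
    """Return True when the claimed sender or channel is unverified."""
    verified = KNOWN_CHANNELS.get(sender)
    # An unknown sender, or a known sender on an unexpected channel
    # (e.g., an "official" suddenly texting from a new app), is suspect.
    return verified is None or channel not in verified

def handle_request(sender: str, channel: str, action) -> None:
    if needs_identity_proofing(sender, channel):
        # Confirm over a channel the recipient initiates from records
        # already on file, never by replying to the suspect message.
        print(f"Hold: confirm {sender!r} out-of-band before acting.")
        return
    action()

# Example: a request arriving over an unexpected messaging channel.
handle_request("alice@example.gov", "signal_text",
               action=lambda: print("Processing request."))
```

The key design choice in this sketch is that confirmation happens over a channel the recipient initiates from contact details they already hold, never by replying to the suspect message itself.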
Alex Quilici, CEO at YouMail:
If AI can fool senators, government officials, and foreign ministers just by mimicking a well-known voice, imagine what it could do to everyday consumers. Tools like Live Voicemail actually open the door wider for these scams. What stands out here is that it's messaging-based, not a live call. Given the current state of AI, fooling someone with short, AI-generated voice clips is fairly easy; keeping up a longer, interactive back-and-forth conversation is still harder, though increasingly within reach.