Generative AI is making a lot of headlines lately, for both its potential risks and its uses. More and more security leaders are having to communicate the risks of generative AI-created malware, and other cyber threats, to the C-suite in their organizations.
Here, we talk to Carl Froggett, Chief Information Officer at Deep Instinct.
Tell me about your background in cybersecurity.
Before joining Deep Instinct as CIO in July 2022, I had the privilege of serving as the Head of Global Infrastructure Defense and Chief Information Security Officer (CISO) Cybersecurity Services at Citi. I was responsible for delivering comprehensive cyber risk reduction capabilities and services aligned with the architectural, business and CISO priorities spanning Citi's infrastructure, devices and networks across more than 120 countries. Since 1994, I've held various regional and global roles at Citi, covering all aspects of architecture, engineering and global operations. Now, I've turned my focus to leading and executing Deep Instinct's operational activities for infrastructure expansion and scaling internal systems, security and processes to keep pace with geographic expansion and strategic customer partnerships and alliances worldwide. I also serve as a trusted advisor to our partners and customers.
What strategies have you employed to communicate generative AI-created malware, and other cyber risks, to the C-suite?
If you had raised the issue of AI, or the impact generative AI will have on an organization's security posture, just a year ago, it likely would not have resonated the way it does today. As a result of recent developments, business risk will rise and have an impact in the short term, never mind over the longer outlook. In the past, C-suite executives were flooded with data about the potential risk of AI, but it was not prioritized simply because other issues felt more urgent or pressing.
With generative AI becoming more mainstream, we are marking a significant shift: it has become imperative that C-suite leaders take seriously generative AI's potential to significantly increase the volume of unknown malware they must protect against. This includes evaluating its broader implications for the business and implementing a plan that enables teams to respond accordingly.
In my experience, the C-suite is most focused on outcomes for the business as a whole. My recent conversations with C-suite leaders have gone beyond cybersecurity to address the whole-business impact, and the opportunities, AI has to offer. I use various pieces of third-party research and data points that clearly define its potential, both positive and malicious. This analysis helps C-suite members fully understand the evolution of AI and where they should focus their attention.
Building off that, it's imperative to explain to the C-suite that what we are dealing with is not simple. Almost anyone can use AI to generate advanced threat campaigns far superior to anything available today, and it can be accomplished at zero cost, with minimal expertise and at extreme velocity.
Through these conversations, C-suite members need to ask their security teams follow-up questions about the effectiveness of the company's defensive posture and the plans to combat this new AI threat. From there, security professionals should acknowledge that further research and investment are needed, and that a preventative approach is essential.
How were those strategies received? What made those communication efforts successful?
In my experience, starting the conversation from a business risk and opportunity perspective, along with using reliable media as a source of information, has been well received. However, it is crucial to be prepared for ad-hoc discussions, as the topic of AI often expands into other areas of concern, like how it can provide a potential competitive edge.
Showing a greater understanding of how AI's impact can extend to other business areas, both positively and negatively, builds trust and confidence in your ability to understand the whole business and the valuable contributions you can bring beyond your role.
Can you describe your approach to identifying if malware attacks are using generative AI?
My first response: does it even matter? Security teams today strive to attribute malware or cyberattacks to specific bad actors. The advancement of AI presents a challenge, as it is becoming increasingly weaponized and accessible to any threat actor regardless of their level of expertise.
It makes me wonder whether it will be possible to accurately trace the specific technologies used to build malware. Security Operations Centers (SOCs) face this challenge mainly because the same AI technology used to detect malware can be manipulated to create deceptive new variants. AI's ability to disguise its output makes it difficult to identify whether or not an attack is in fact powered by generative AI. What's more, as malware campaigns deploy new AI-generated threats and rapidly generate different types, there's a chance that every stage of the kill chain will be built with these innovations. From surveillance and delivery to the final weaponization, AI will play a pivotal role in each step, evading controls and making the threat much more difficult for today's security solutions to detect.
This monumental shift is what people are gradually beginning to understand. It is not a simple industry change; it is a complete transformation of the threat landscape as we know it, and our traditional cyber best practices will no longer be relevant. The rise of AI highlights the need for new predictive, preventative measures to navigate the changing times. Harnessing advanced AI to combat advanced AI is the optimal approach in today's evolving threat landscape. Fight AI with AI.
How can generative AI act as a language translator for overseas hacking groups?
Will it help overseas hacking groups? Yes, but language is not what is holding them back. Generative AI is much more than a mere language-translation tool. The opportunity for bad actors to harness these technologies grows every day, revealing how limited our current understanding of AI's capabilities within the established threat landscape really is.
One example can be seen in viral clips of well-known celebrities performing hyper-realistic movie stunts in scenes they never appeared in. These audio and video snippets place individuals in completely AI-generated situations. This is more than a translation tool: it mimics the nuanced details and mannerisms that help threats go unnoticed.
Spearphishing, another trending cyberattack method, will leverage this even further, as the impersonation of an executive will read and feel just like the real person. The attack can be augmented with an audio call generated entirely by AI to reinforce the pressure on the target to perform the act.
How do you see generative AI affecting cyber risk in the future?
Everything we know about AI in cybersecurity is on a path of change. The standard frameworks we trust today are no longer the right tools in this ongoing conversation. Bad actors will leverage generative AI to quickly adapt and bypass existing defenses, increasing success rates for methods like phishing and impersonation attacks, and that adaptation to defensive responses will continue in real time using generative AI.
On the other hand, AI technologies will also be used to mitigate risks, primarily through intelligent technologies like deep learning (DL), the most advanced form of AI. What makes DL effective is its ability to self-learn as it ingests data and to work autonomously, without human intervention, preventing complicated threats at the speed and scale needed to stop attacks before they get inside. DL allows leaders to shift from the traditional assumed-breach mentality to a predictive prevention approach that combats AI-generated malware effectively. This approach means organizations can prevent threats before they even happen. By adopting this perspective, leaders can stay ahead of the rapidly evolving threat landscape, enhance their security posture, and prevent attacks, reducing the burden on their SOC and lowering the costs of response and remediation.
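The key idea above is that DL learns directly from raw data rather than from hand-written signatures. As a toy sketch of that workflow (not Deep Instinct's actual technology), the following trains a simple logistic classifier, standing in for a real deep network, on raw byte histograms of synthetic "files". The benign/packed-payload data distributions and every number here are invented purely for illustration.

```python
import math
import random

random.seed(0)

def make_file(malicious: bool, size: int = 512) -> bytes:
    # Synthetic sample: "benign" files skew toward printable ASCII,
    # "malicious" files mimic packed/encrypted payloads (uniform bytes).
    if malicious:
        return bytes(random.randrange(256) for _ in range(size))
    return bytes(random.randrange(32, 127) for _ in range(size))

def features(blob: bytes) -> list:
    # Normalized byte histogram: 256 raw features, no hand-written rules.
    hist = [0.0] * 256
    for b in blob:
        hist[b] += 1.0
    return [c / len(blob) for c in hist]

def train(xs, ys, epochs=30, lr=2.0):
    # Plain SGD on the logistic log-loss.
    w, bias = [0.0] * 256, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = bias + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - y  # gradient of the log-loss w.r.t. z
            bias -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, bias

def predict(w, bias, x):
    return int(bias + sum(wi * xi for wi, xi in zip(w, x)) > 0.0)

# Small labelled corpus with a held-out test split.
data = [(features(make_file(m)), int(m)) for m in [False, True] * 100]
random.shuffle(data)
train_set, test_set = data[:150], data[150:]
w, b = train([x for x, _ in train_set], [y for _, y in train_set])
accuracy = sum(predict(w, b, x) == y for x, y in test_set) / len(test_set)
print(f"held-out accuracy: {accuracy:.2f}")
```

In a real DL pipeline the histogram-plus-logistic pair would be replaced by a deep network consuming raw bytes, but the workflow (label data, learn weights, classify unseen files before they execute) is the same.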
As cybersecurity professionals face the mounting pressure of increased cyber threats, embracing a prevention-first stance can also ease larger issues like industry-wide burnout and the cyber skills gap. Today, more than 90% of security professionals are stressed, and 46% have considered quitting. They can't keep up with the current volume of alerts and spend their time chasing down false flags, worried they will miss the active threat on the network. Innovative cybersecurity solutions powered by deep learning lower risk by actually preventing zero-days before they get inside, and they significantly reduce the noise. That frees security professionals to spend their time and energy where their expertise helps the business most, like analyzing the threat landscape and patching vulnerabilities, the most critical aspects of their roles.
Shifting toward a more adaptable, preventative approach, discarding conventional reactive methods, and embracing the new era of AI is vital to combat cyber risks in today's tumultuous threat landscape.