Ever since its release last year, ChatGPT has been making headlines. Some organizations view the development as a potential cybersecurity tool; others say it comes with cybersecurity risk.
Here, we talk to Kev Breen, Director of Cyber Threat Research at Immersive Labs.
Security magazine: What is your title and background?
Breen: I have been interested in technology ever since I was young. I joined the British Army at the age of 18 and served for 15 years, working with communication systems and deployable networks before defending Ministry of Defence (MOD) networks from cyberattacks and nation-state threat actors.
Around the same time, I started publishing research to the cybersecurity community, sharing knowledge and upsetting my fair share of malware authors, who took to insulting me personally in their code.
After my time in the military, I was a Principal CIRT Analyst at a high-tech player in aerospace, defense and security before joining Immersive Labs. At Immersive Labs, I’ve led the team that creates practical labs to help organizations build and prove cyber capabilities. I now spend the majority of my time researching new and emerging threats and vulnerabilities to create hands-on environments that allow cyber leaders to test red and blue team skills against those threats.
Security magazine: How have you seen artificial intelligence (AI) impact the cybersecurity field in the past months?
Breen: If we are being honest with ourselves, we haven't yet seen anything we didn't already recognize as a cyber threat. What’s new is that AI tools have become much more accessible and capable in recent months, catalyzing major changes to our society and cybersecurity posture.
Privacy has been top of mind for organizations worried about the impact of these AIs in the workplace. Samsung recently banned employees from using such tools over fears that they were ingesting sensitive corporate data. Italy took a country-wide approach, blocking OpenAI from its citizens over concerns about its data handling and GDPR compliance. OpenAI opened the door to these large, powerful, capable assistants and seems to have started a global technological arms race: large U.S. tech giants are vying to be best in class, Alibaba and Baidu are launching their own services, and Chinese regulators have published draft rules for governing these powerful new tools.
Security magazine: ChatGPT has made headlines since it was released in November 2022. While some organizations have seen this as a potential cybersecurity tool, it and other artificial intelligence technologies come with cybersecurity risk. Can you give us some examples of how AI can be an asset and threat to cybersecurity?
Breen: The thing I love most about large language models (LLMs) like ChatGPT and newer entries, like Bard and HuggingChat, is how they can reduce the barrier to entry for security teams dealing with complex problems. It’s similar to having an assistant alongside you that understands the problem and tries to help.
Some examples of this in the cyber defense space include:
Interpreting code and logs — We often see attackers use PowerShell scripts and command line tools as part of their compromise. Providing snippets of code to an LLM and asking it to explain what the code is doing is a great example of this. More senior analysts can infer this themselves from existing knowledge and experience; now junior analysts can benefit in the same way without disrupting their seniors.
Because the AI is conversational and uses the history of each conversation for context, analysts can then follow up with requests like asking it to create a Splunk query to detect malicious PowerShell being executed.
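As a concrete illustration, the kind of detection logic such a query encodes can be sketched in Python. This is a minimal, hypothetical example with a deliberately simplified indicator list, not a production detection rule:

```python
import base64
import re

# Hypothetical, simplified indicators often seen in malicious
# PowerShell invocations (for illustration only).
SUSPICIOUS = [
    r"-enc(odedcommand)?\b",
    r"downloadstring",
    r"iex\b",
    r"bypass",
    r"hidden",
]

def flag_powershell(cmdline):
    """Return the suspicious indicator patterns present in a command line."""
    lowered = cmdline.lower()
    return [p for p in SUSPICIOUS if re.search(p, lowered)]

def decode_encoded_command(cmdline):
    """If the command uses -EncodedCommand, decode its UTF-16LE base64 payload."""
    m = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.I)
    if not m:
        return None
    return base64.b64decode(m.group(1)).decode("utf-16-le")
```

Decoding the `-EncodedCommand` payload, as `decode_encoded_command` does, is exactly the kind of step a junior analyst can ask an LLM to explain or automate.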
Intelligent Google search — Say you have raw log data collected during an investigation, or perhaps some web logs, and you want to quickly count the IP addresses that have requested a specific file and order them by frequency. Most people would search online for how to filter logs, find a Stack Overflow question that's roughly close, try it, and then modify it to do what they actually want. That process can take several minutes, if a sensible answer can be found at all. By contrast, entering the same question, with an example log line, into an AI tool like ChatGPT gives you the specific command you need to run in seconds.
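For illustration, here is a minimal Python sketch of the kind of answer such a prompt might produce, assuming Apache/Nginx common log format (client IP in the first field, request path in the seventh):

```python
from collections import Counter

def top_requesters(log_lines, target_path):
    """Count how many times each client IP requested target_path,
    ordered most frequent first (common log format assumed)."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        # Common Log Format: client IP is field 0, request path is field 6.
        if len(parts) > 6 and parts[6] == target_path:
            counts[parts[0]] += 1
    return counts.most_common()
```

An LLM might equally return the classic shell pipeline for the same task (`awk '{print $1, $7}' access.log | grep <path> | sort | uniq -c | sort -rn`); the point is that either arrives in seconds rather than after several minutes of searching.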
Creating code — There has been a lot of industry chatter recently about how AI can write malware and other malicious code. While this is possible, it's not as simple as asking it to "write me malware." The AI has to be coerced into writing it in small parts that are then assembled into something larger and malicious. It’s far easier for attackers to search an open source code repository for existing malware and obfuscation methods.
Where I see this having the biggest impact is the same way we defenders use it: to explain things and generate code for us. One example is bug hunting and security research, where you may be using a tool like Burp to intercept and modify traffic. You can give examples of those requests to an AI tool and ask it to explain them, or to "create me a script that can modify and send these requests."
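The kind of script an AI assistant might generate for that task can be sketched in a few lines of Python. This is a hypothetical helper that rewrites a header in a raw HTTP request exported from a proxy; actually sending the modified request over the network is omitted:

```python
def modify_raw_request(raw, header, value):
    """Replace (or add) a header in a raw HTTP request string,
    leaving the request line and body untouched."""
    head, sep, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    request_line, headers = lines[0], lines[1:]
    # Drop any existing copies of the header (case-insensitive match).
    prefix = header.lower() + ":"
    headers = [h for h in headers if not h.lower().startswith(prefix)]
    headers.append(header + ": " + value)
    return "\r\n".join([request_line] + headers) + sep + body
```

In practice you would paste a real intercepted request into the prompt and iterate conversationally until the generated script matches the traffic you need to replay.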
Security magazine: What advice do you have for enterprise cybersecurity leaders in protecting their teams from emerging threats?
Breen: In terms of AI being used by organizations, I have two pieces of advice. First, set up an acceptable usage policy for AI in your workplace that defines what, where and how it can be used or integrated, keeping you on the right side of IP and data privacy law. Second, generative AI tools are in service of the prompts humans give them, and they will often "make things up" to achieve that outcome. This is often referred to as "hallucination," so treat everything an AI tool says with a grain of salt.
Looking at the macro picture, AI is actually not the largest emerging threat enterprises face today. Attackers are looking more toward compromising the supply chain as a means of reaching organizations with strong perimeter security. Supply chain attacks have been successful, as recent high-profile incidents such as SUNBURST, Kaseya and 3CXDesktop demonstrate.
Responding to zero days and keeping pace with attackers is critical. There are many past examples of attackers leveraging these vulnerabilities within hours or days of their disclosure. Log4j, Outlook NTLM and PaperCut are some of the higher-profile cases where systems needed to be patched quickly in response.