Artificial intelligence (AI) and machine learning (ML) were integral to cybersecurity well before the generative AI era. These technologies have optimized tasks like malware detection, phishing prevention and the automation of security operations centers (SOCs). Given the hype around AI tools and the introduction of enterprise versions, the use of these solutions will likely increase significantly in the near future. With that in mind, let’s delve into the benefits and limitations of five practical examples of using generative AI within the realm of incident response.

Enhance threat hunting 

A critical task for cybersecurity defenders is identifying indicators of compromise (IoCs). Technologies like generative AI show promise in expediting this task by mimicking human text analysis at an accelerated pace. A specialized script can assist in this endeavor, extracting security event logs and running processes from target systems. Through requests to the OpenAI API, this metadata is then evaluated for signs of compromise.
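
As an illustration, the following is a minimal sketch of what such a script might look like. It assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name, prompt and 12,000-character cap are illustrative choices, not a prescribed setup.

```python
# Minimal sketch: send process metadata to the OpenAI API for IoC triage.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
import subprocess

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Collect metadata from the target system (here, a Linux process listing).
processes = subprocess.run(["ps", "aux"], capture_output=True, text=True).stdout

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "You are a SOC analyst. Flag any process that looks "
                       "like an indicator of compromise and explain why.",
        },
        {"role": "user", "content": processes[:12000]},  # keep within context limits
    ],
)
print(response.choices[0].message.content)
```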

However, certain limitations hinder complete automation, requiring human participation and careful usage considerations. When the initial excitement around generative AI emerged, its efficiency in IoC detection had to be evaluated. The results revealed that out of 3,577 security events, 74 were classified as malicious, with 17 marked as false positives. Yet this method isn't entirely foolproof. It raises cost concerns and has legal implications associated with transmitting sensitive data to OpenAI.

Facilitate reverse engineering

Reverse engineering plays a pivotal role in helping defenders comprehend the nature of a threat and the distinct characteristics of the malware. In this context, GPT-like technologies prove valuable, succinctly describing code functionality and aiding in identifying malicious code samples and their capabilities. This support can accelerate threat intelligence efforts.
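
For instance, a short script along these lines could ask the model to summarize a decompiled function. The client setup carries the same assumptions as the sketch above (openai v1+ package and an API key), and the C snippet below is synthetic, included purely for illustration.

```python
# Minimal sketch: ask the model to describe a decompiled function.
# The snippet below is a synthetic example, not real malware.
from openai import OpenAI

client = OpenAI()

decompiled = """
int check(char *s) {
    for (int i = 0; s[i]; i++) s[i] ^= 0x5A;  /* XOR-decode the string */
    return strcmp(s, "c2.example.com") == 0;
}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "Describe what this decompiled C function does and "
                       "flag any behavior typical of malware.",
        },
        {"role": "user", "content": decompiled},
    ],
)
print(response.choices[0].message.content)
```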

While using generative AI for reverse engineering yields insights, it's not without drawbacks. Its difficulty handling intricate code, along with legal concerns, emphasizes the need to pair it with human expertise for precise and secure analysis. Outputs should certainly undergo validation to ensure reliability. Furthermore, as in any other scenario, one must exercise caution when sharing potentially sensitive code with a chatbot, as this could raise privacy concerns.

Polish instructions and report writing quality

Going beyond applications in threat detection and reverse engineering, generative AI can also lend its support to a task frequently handled by a diverse range of security professionals, from analysts to CISOs: text composition. Furthermore, given that cybersecurity is a genuinely global community, much of this communication takes place in a language other than one's native tongue. In this context, generative AI can undoubtedly assist in transforming initial drafts into well-structured, articulate and coherent business language.

Security leaders can have generative AI review the final text, after first removing all confidential details. A well-crafted prompt should include these key points (a sample prompt is sketched after the list):

  • Instruct the tool on who it should emulate, for example, telling it to mimic an American English native speaker who is a high-level cybersecurity executive.
  • Specify the task, like proofreading a report. Security leaders can also ask the model to enhance clarity, coherence and flow, using active verbs and tech-savvy language.
  • To prevent any unexpected content, security leaders can direct the tool not to introduce facts not included in the original text.
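
Putting the three points together, a sample prompt might look like the following minimal sketch; the model name is illustrative, and draft_report stands in for the sanitized text.

```python
# Minimal sketch: a proofreading prompt covering the three points above.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Act as a native speaker of American English who is a high-level "
    "cybersecurity executive. Proofread the report below: improve clarity, "
    "coherence and flow, using active verbs and tech-savvy language. Do not "
    "introduce any facts that are not in the original text."
)

draft_report = "..."  # the sanitized draft, with confidential details removed

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": draft_report},
    ],
)
print(response.choices[0].message.content)
```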

Remember to cross-check the resulting text. The main downside is that proofread texts from generative AI need careful double-checking, as the tool tends to omit facts from the original text, add new ones, alter meaning and even make factual errors.

Expedite threat analysis script development

In cybersecurity, various types of scripts are used to search for and identify threats. Some commonly used scripting languages for these purposes include Python, Bash, PowerShell and Perl. Yet grappling with a language's intricate syntax and the challenges of debugging can be both taxing and time-consuming. This is where generative AI helps, enabling swift creation of scripts tailored to a task while decreasing the need for deep knowledge and expertise in specific areas, such as Linux tools (awk, sed, grep) and regular expression syntax. Additionally, if the script needs to work on older systems, it can be written to the POSIX standard to ensure compatibility. To fully harness generative AI's capabilities, the instructions must be clear. Specify the task, such as writing a Bash script that finds and parses cron jobs, along with defining the output and contextual details.
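
To make this concrete, here is a minimal sketch of such a request issued through the API. The prompt simply restates the task, output and context described above, and the model name is illustrative.

```python
# Minimal sketch: request a POSIX-compatible cron-parsing script, specifying
# the task, the expected output and the context, as suggested above.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Bash script that finds and parses cron jobs on a Linux host.\n"
    "Task: list entries from /etc/crontab, /etc/cron.d/ and user crontabs.\n"
    "Output: one line per job, in the form 'user<TAB>schedule<TAB>command'.\n"
    "Context: the script must also run on older systems, so stick to POSIX "
    "sh features and standard tools such as awk, sed and grep."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review and test before running
```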

While generative AI offers convenience when writing scripts, apart from the standard privacy concerns, its limitations include the potential to overlook complex scenarios. The output generated by generative AI is also limited in length, so an analyst may need to manually combine different segments of code received from the tool, which can be a challenging task. Once the script is generated, give it a thorough double-check and run a test to ensure accuracy. Refine as needed to fine-tune the outcome.

Accelerate remediation script creation

A significant part of the incident life cycle consists of the remediation phase. During this stage, cybersecurity professionals may, for example, remove malicious files or block their execution, isolate hosts on the network, disable user accounts and perform other tasks to restore the affected system. Scripts are needed for this, and generative AI can help with them just as it does with scripts for the analysis phase. The AI tool accelerates script development, hastening the removal of malware that impacts the business. Additionally, it facilitates the process by providing code examples and functions, improving the workflow for junior analysts.
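
As an illustration, below is a minimal, hand-written sketch of the kind of quarantine script an analyst might ask the model to draft. The hash and paths are placeholders, and quarantining (rather than deleting) is a deliberately conservative choice.

```python
# Minimal sketch: quarantine files matching a known-bad SHA-256 hash.
# The hash and paths are placeholders; verify the IoC and back up data
# before running anything like this on a real host.
import hashlib
import shutil
from pathlib import Path

MALICIOUS_SHA256 = "0" * 64           # placeholder; substitute the real IoC hash
SCAN_DIR = Path("/tmp")               # illustrative scan scope
QUARANTINE = Path("/var/quarantine")  # move, don't delete, so files can be restored

QUARANTINE.mkdir(parents=True, exist_ok=True)

for path in SCAN_DIR.rglob("*"):
    if path.is_file() and hashlib.sha256(path.read_bytes()).hexdigest() == MALICIOUS_SHA256:
        shutil.move(str(path), str(QUARANTINE / path.name))
        print(f"quarantined: {path}")
```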

Certainly, these types of scripts should also be approached with caution. The need to share IoCs with generative AI raises privacy concerns. Moreover, if the output script turns out to be erroneous (and the likelihood of that is quite significant), there is a risk of deleting essential system components and severely disrupting the system's operation. Prior to implementing these scripts, it's crucial to thoroughly verify the AI tool's output and ensure there is a backup of the data.

In conclusion, it's important to note that while AI tools can handle basic tasks with some creativity, they are not yet ready to take these tasks over completely, particularly in complex fields like cybersecurity. Real cybersecurity tasks require the precision, speed and reliability that only experienced human experts can provide at this time, hence the demand for services in which human experts uncover and neutralize malware that has evaded a company’s existing security solutions.