Cyber threats are always evolving. Not only has artificial intelligence (AI) lowered the barrier to launching an attack, but adversaries are also using the technology to increase the speed and sophistication of their tactics through methods like code generation, password cracking, social engineering and phishing. For example, attackers can use generative AI to build convincing profiles on social media platforms, making it easier to impersonate individuals or create fake personas for social engineering attacks.

Conversely, AI has also paved the way for a new era of enhanced cyber defense. Organizations can fight fire with fire by leveraging AI as a force multiplier for their threat intel and defense teams. AI can enable organizations to increase efficiency and more quickly identify and respond to threats by automating repetitive tasks, predicting behaviors, recommending response actions, developing detections, processing large datasets and continuously scanning for anomalies.
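
To make the automation piece concrete, here is a minimal sketch of rules-based alert triage layered on top of a model score. It is an illustration under assumed inputs, not a production design: the Alert fields, thresholds and routing labels are all hypothetical.

```python
# Minimal triage sketch: route alerts so analysts only see what needs human
# judgment. Field names, thresholds and labels are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str           # e.g., "ids", "auth", "endpoint"
    severity: int         # 1 (low) through 5 (critical)
    anomaly_score: float  # 0.0 to 1.0, e.g., output of a trained model

def triage(alert: Alert) -> str:
    if alert.severity >= 4 or alert.anomaly_score > 0.9:
        return "escalate_to_analyst"  # likely real threat: human review now
    if alert.anomaly_score > 0.5:
        return "enrich_and_queue"     # gather context, review in batch
    return "auto_close"               # routine noise: close automatically

print(triage(Alert(source="auth", severity=2, anomaly_score=0.95)))
# -> escalate_to_analyst
```

The value is in the division of labor: the model scores everything, while humans spend their time only on the alerts the rules escalate.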

While AI is only as good as the data that powers it, data privacy and ethics should never be a trade-off for security. In order to wield the technology to their benefit, organizations must prioritize trust among all stakeholders. Cybersecurity professionals play a critical role in upholding that trust in their organizations by safeguarding against cyber threats, mitigating bias and implementing transparent practices.


Protecting privacy and upholding transparency

Many organizations already have guardrails and guidelines in place to protect customer data and privacy, such as working from aggregated, bulk datasets rather than individual records. By training models to spot anomalies like a surge in traffic or in the volume of login attempts, they can identify potential threat vectors without ever touching individuals’ data.
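
As a minimal sketch of that approach, the example below flags a surge in hourly login attempts using only aggregate counts and a simple z-score against the recent baseline. The figures and the three-standard-deviation threshold are hypothetical; a real deployment would train a model over richer aggregate features.

```python
# Sketch: detect a surge using only hourly totals, never individual records.
# The counts and the z-score threshold below are hypothetical.
import statistics

hourly_login_attempts = [120, 131, 118, 125, 140, 122, 135, 620]

baseline = hourly_login_attempts[:-1]  # prior hours form the baseline
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

latest = hourly_login_attempts[-1]
z_score = (latest - mean) / stdev

if z_score > 3:  # more than 3 standard deviations above normal
    print(f"Anomaly: {latest} login attempts this hour (z-score {z_score:.1f})")
```

Because the model only ever sees totals, the privacy guardrail is structural: there is no individual data in the pipeline to leak or misuse.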

Companies should also be transparent with users about how their data is used and how long it is retained. Additional rules specific to generative AI should address whether and how personal data is used to train future models, safeguarding it within the AI ecosystem. Greater communication and transparency will build and strengthen trust among users and stakeholders.



Mitigating the risks of outdated and biased data

It’s not enough to ensure the data is protected; it must also be accurate. The biggest AI-powered security threats for an organization aren’t always external adversaries. Outdated or incomplete data powering AI models can skew outcomes, overlook anomalies and introduce bias, all of which can significantly erode trust in an organization. For instance, biased data could produce false positives, and incomplete data could limit visibility into genuine security risks, generate incorrect recommendations for analysts to follow, or miss an emerging threat altogether.

To stay ahead of these pitfalls, human involvement and oversight are key. Organizations should conduct regular audits and assessments of their data to identify and resolve potential biases, such as underrepresented or overrepresented groups within their datasets. Cybersecurity professionals should then continually monitor the AI models and update them as new insights emerge.
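
An audit like that can start small. The sketch below checks whether any category in a labeled training set is under- or overrepresented; the labels, counts and the 10 and 50 percent bounds are placeholders an organization would replace with its own data and tolerances.

```python
# Sketch of a recurring data audit: flag categories that are under- or
# overrepresented in training data. All labels, counts and bounds are
# hypothetical placeholders.
from collections import Counter

training_labels = ["phishing"] * 800 + ["malware"] * 150 + ["insider"] * 50

counts = Counter(training_labels)
total = sum(counts.values())

for label, count in counts.items():
    share = count / total
    if share < 0.10:
        print(f"{label}: {share:.0%} of samples, underrepresented; add data")
    elif share > 0.50:
        print(f"{label}: {share:.0%} of samples, overrepresented; may dominate the model")
```

Run on a schedule, even a check this simple turns "audit the data" from a slogan into a repeatable task.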

Though AI can drive efficiency, cybersecurity teams have an in-depth understanding of a company’s assets and knowledge of the broader operational context that cannot be replicated by any AI model. By ensuring these experts still play an integral role in security efforts, organizations can catch incorrect AI outputs and stay ahead of AI-powered hacking methods.


Collaboration is critical

As much as AI can do to supplement cybersecurity measures, one of the most effective ways to stay ahead of evolving hacking trends is collaboration. As technology and defense teams advance, attackers find new ways to circumvent AI-based security measures. That’s why building relationships with other vendors, partners and government agencies is among the most proactive steps companies can take to stay ahead of evolving AI-powered cyber threats. Responsible disclosure and information sharing are also critical components of these relationships. By understanding the broader threat landscape and how other entities are managing those threats, organizations can learn how to better protect their own systems and data.

Companies can also build relationships with ethical hackers through bug bounty programs and live hacking events. These ethical hackers can help security teams understand and counter criminal hackers’ tactics, especially as those tactics increasingly rely on AI.

Ultimately, AI is a double-edged sword for cybersecurity: security teams will use it to improve their defenses against cyberattacks, while cybercriminals will use it to develop new attack vectors and tactics. To successfully counter these threats, cybersecurity professionals must place trust at the center of their response strategy.