How Artificial Intelligence Is Amplifying Data Breach Risks — and Enhancing Mitigation

AI is not yet ubiquitous, but McKinsey & Company reported that by the end of 2024, nearly three-quarters of organizations worldwide were using AI in at least one business function. Unfortunately, bad actors are also adopting AI at a rapid rate, transforming the cybersecurity landscape by amplifying the sophistication, speed and scale of cyberattacks.
While laws alone won’t prevent cybercriminals from launching devastating attacks on organizations (they’re criminals, after all), the lack of United States federal laws governing the development and use of AI does not help. The U.S. lags behind the EU, whose AI Act was agreed in late 2023 and formally adopted in 2024. The Act introduces binding rules for general-purpose AI models and creates a European AI Office to centralize governance.
Limitations to the Impact of AI Regulations
One way AI regulations can mitigate the risks of automated AI-driven attacks is by mandating cybersecurity standards for AI systems deployed in high-value sectors such as healthcare, energy and finance. This could help reduce the impact of attacks on critical infrastructure and sensitive data.
There are limits to the effectiveness of legislation, of course. Laws cannot keep pace with technology innovation or the rapid evolution of cyber threats, and U.S. laws cannot stop foreign nation-state threat actors from using AI maliciously. Cybersecurity threats and data breaches respect no borders; many are global in nature.
Despite the lack of laws and regulations guiding the safe and proper use of AI, many organizations are “all in” on AI’s promise to improve operational efficiency, reduce costs, drive faster innovation and provide a competitive edge. Our recent survey of 2,000 global IT and sustainability leaders illustrates this enthusiasm: 83% have deployed some form of AI.
AI Has Changed the Threat Landscape
Over a 12-month period from Q4 2022 to Q3 2023, researchers analyzed billions of threats across email, mobile, and browser channels, including malicious links, attachments, and social engineering via natural language messages. The research also included a deep dive into Dark Web activity, with a focus on how threat actors are leveraging Generative AI tools and chatbots. The report findings are chilling, revealing a 1,265% increase in malicious phishing emails and a 967% rise in credential phishing attacks, signaling a sharp escalation in AI-enabled cyber threats.
Whether they’re AI-enabled or not, data breaches are having a big impact on the majority of organizations. Eighty-six percent of our survey’s respondents claim they’ve experienced a data breach in the last three years, with 96% of those having experienced a breach in the last 12 months.
AI hasn’t just changed the threat landscape; it has dramatically transformed it, and where it goes from here is hard to predict with accuracy. The evolution of agentic AI may be the CISO’s next big challenge. Bad actors could easily build and deploy malicious AI agents to plan, adapt and execute cyberattacks with little to no human intervention. These could include targeted phishing or spear phishing campaigns that adjust in real time, or agents that can identify and exploit vulnerabilities automatically across very large attack surfaces.
There is a silver lining to AI: it can strengthen organizations’ security posture through proactive threat detection and enable a more rapid response to increasingly sophisticated threats. Faster identification of anomalies and zero-day threats is critical in today’s fast-moving risk environment.
AI and Data Overload
Companies employing AI are already encountering emerging data security challenges unique to both the inputs and outputs of AI engines.
Training large language models (LLMs) demands enormous volumes of data to improve accuracy. More data means a larger threat footprint. And with more data comes greater regulatory responsibility, which AI may complicate: nearly one quarter (23%) of our survey respondents claim AI has made it more difficult to achieve compliance with data protection regulations.
The good news is that AI, while generating more data and introducing new challenges, may help organizations refine data management. From data discovery and classification to sanitization, AI-enabled processes can minimize the amount of redundant, obsolete, or trivial (ROT) data and better protect the data that should remain.
As they generate more data (and thus expand the data attack surface), organizations will need to adapt their data lifecycle policies to the AI Era. Using AI to ensure regular data minimization will help mitigate risks of data breaches and leaks.
Four ways AI can help organizations mitigate risk through better data lifecycle management:
- Data Discovery and Classification: AI can be highly effective at automatically scanning and identifying sensitive or personal data across structured and unstructured sources, including databases, documents and emails. Machine learning models are useful for classifying data by type (PII, PHI, financial data, etc.) and context.
- Automated Retention and Deletion Policies: AI can enforce policies based on retention schedules and contextual analysis, and then flag data that exceeds legal or business retention limits. From there, it can automatically schedule erasure or anonymization tasks. This can effectively reduce the propensity for over-retention, a common compliance issue.
- Risk Assessment for Data Sanitization: AI can assess the risk of retaining certain data based on a number of factors: sensitivity, age, relevance, past usage patterns and potential exposure impact. Once risk is scored, IT teams can prioritize what data should be permanently erased through data sanitization.
- Compliance Monitoring and Policy Updates: AI tools can monitor ever-changing data and consumer privacy regulations, including the CPRA, HIPAA and GDPR, and suggest updates to retention policies accordingly.
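To make the workflow above concrete, here is a minimal Python sketch of the discovery, retention-flagging and risk-scoring steps. It is a hypothetical illustration, not a production implementation: the regex detectors stand in for trained ML classifiers, and the `Record` type, pattern names and retention limits are all assumed values, not part of any real product or regulation.

```python
import re
from dataclasses import dataclass
from datetime import date

# Assumed stand-ins: real systems would use trained classifiers and
# retention schedules derived from actual legal requirements.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
RETENTION_LIMIT_DAYS = {"pii": 365, "general": 1095}  # hypothetical policy

@dataclass
class Record:
    record_id: str
    text: str
    created: date

def classify(record: Record) -> str:
    """Step 1: discovery and classification (regex proxy for an ML model)."""
    for pattern in PII_PATTERNS.values():
        if pattern.search(record.text):
            return "pii"
    return "general"

def risk_score(record: Record, category: str, today: date) -> float:
    """Step 3: score retention risk from sensitivity and how overdue it is."""
    age_days = (today - record.created).days
    sensitivity = 1.0 if category == "pii" else 0.2
    overdue = max(0, age_days - RETENTION_LIMIT_DAYS[category])
    return sensitivity * (1 + overdue / 365)

def flag_for_sanitization(records, today):
    """Steps 2 and 4: flag data past its retention limit, highest risk first."""
    flagged = []
    for r in records:
        cat = classify(r)
        if (today - r.created).days > RETENTION_LIMIT_DAYS[cat]:
            flagged.append((r.record_id, cat, risk_score(r, cat, today)))
    return sorted(flagged, key=lambda t: t[2], reverse=True)
```

Running `flag_for_sanitization` over a record store would surface over-retained sensitive data first, which is the prioritization the list above describes: classification feeds the retention check, and the risk score orders the sanitization queue.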
It’s Complicated: Using AI to Fight AI Threats
For today’s CISOs, AI represents both a growing threat and a transformative opportunity. Malicious actors are already exploiting AI to launch attacks at scale — through adaptive malware, deepfakes, and convincing social engineering tactics — forcing security leaders to confront an increasingly sophisticated adversary. At the same time, AI is driving impressive improvements in cybersecurity. This includes smarter data governance: enabling automated retention policies by identifying ROT data that should be erased, reducing data sprawl, and decreasing an organization’s overall risk.
The mandate is clear: security leaders must embrace AI as an essential tool, leveraging it to not only counter threats today, but to architect a leaner, more resilient, and more agile security posture for the future.