The landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, encourages federal agencies to use their existing authorities to deploy AI effectively in defense against cyber threats.

At the same time, the Executive Order directs agencies to address the pressing security risks of AI systems. Given AI’s transformational potential, it is essential to deploy and evaluate every AI system carefully, ensuring it is operated ethically and remains resilient against cyberattacks.

AI, then, is both a security concern and a powerful cybersecurity tool. If agencies rigorously test and monitor their AI systems, and build them on accurate, secure data sets, AI models can be used to meet urgent cybersecurity needs and zero-trust goals.

Securing AI — turning a potential risk into a powerful asset

The latest National Cybersecurity Strategy (NCS) identifies AI as a “revolutionary change in the technology landscape.” To optimize AI’s value as a cybersecurity tool while strengthening the security of AI systems themselves, the NCS recommends strategic investments in research and development, education and private-sector partnerships.

In addition to close collaboration and R&D, rigorous pre-deployment testing and continual evaluation of post-deployment performance are essential to shore up defenses for AI systems. Developers can also equip AI models with monitoring that proactively alerts users to unexpected shifts in inputs or outputs, allowing misuse or degradation to be detected quickly. These safeguards help ensure that an AI system functions as intended and is well defended against adversaries.
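
To make this concrete, a drift monitor can be as simple as comparing the distribution of a model’s live inputs against a baseline captured at deployment. The sketch below uses a two-sample Kolmogorov-Smirnov test; the alert threshold and the synthetic data are illustrative assumptions, not requirements drawn from the Executive Order or any specific framework.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative drift monitor: compares live input data against a
# baseline sample captured when the model was deployed.
ALERT_P_VALUE = 0.01  # assumed significance threshold; tune per use case

def check_drift(baseline: np.ndarray, live_window: np.ndarray) -> bool:
    """Return True if the live window has drifted from the baseline."""
    statistic, p_value = ks_2samp(baseline, live_window)
    return p_value < ALERT_P_VALUE

# Synthetic example: baseline traffic volume vs. a shifted live window.
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=100.0, scale=10.0, size=5_000)
live = rng.normal(loc=130.0, scale=10.0, size=500)  # simulated shift

if check_drift(baseline, live):
    print("ALERT: input distribution shift detected -- review the model")
```

In practice, a check like this would run on a schedule, with alerts routed into the agency’s existing security monitoring pipeline for human review.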

Data security and governance are also key to defending AI systems and ensuring that any information the system generates is accurate. Each data set should be carefully selected and vetted for the given use case. Dimensions of data quality to consider include accuracy, completeness, consistency, timeliness and uniqueness. Moreover, holistic data governance should account for all roles, responsibilities and regulations that affect an organization’s data usage.
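
As a rough illustration of what a quality gate might look like, the sketch below checks a small, hypothetical event table for completeness, uniqueness and timeliness. The column names and the 90-day freshness window are assumptions for demonstration only.

```python
import pandas as pd

# Illustrative data-quality gate; column names and thresholds are hypothetical.
records = pd.DataFrame({
    "event_id": [1, 2, 2, 4],
    "source_ip": ["10.0.0.5", "10.0.0.9", "10.0.0.9", None],
    "observed_at": pd.to_datetime(
        ["2024-01-10", "2024-01-11", "2024-01-11", "2023-06-01"]
    ),
})

report = {
    # Completeness: share of non-null values per column.
    "completeness": records.notna().mean().round(2).to_dict(),
    # Uniqueness: duplicate rows that could skew a trained model.
    "duplicate_rows": int(records.duplicated().sum()),
    # Timeliness: records older than an assumed 90-day freshness window.
    "stale_records": int(
        (pd.Timestamp("2024-01-15") - records["observed_at"])
        .dt.days.gt(90).sum()
    ),
}
print(report)
```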

As federal laws and policies evolve to account for emerging AI and machine learning (ML) models, it’s paramount that any applicable system remain compliant with those regulations. It’s equally critical that the public, private and academic sectors work in close coordination to develop stringent security policies without inhibiting innovation.

If the necessary security precautions are taken, emerging cyber technologies driven by advanced analytics, AI and machine learning can automate threat detection, mitigation and response while providing greater visibility across the enterprise. These practices can dramatically strengthen an agency’s security posture and help it keep pace with an ever-evolving threat landscape.

AI’s cybersecurity applications and advantages

AI-powered systems can use machine learning algorithms to learn from previous attacks and continuously adapt their defenses. To effectively deploy AI against cyber threats, agencies should look to machine learning operations (MLOps) and model operations (ModelOps). 

ModelOps is a framework that helps data scientists manage, govern and secure machine learning, predictive analytics and AI models. MLOps, a subset of ModelOps, is a set of practices that automate and simplify ML workflows and deployments. Integrating these practices with cybersecurity operations can enable agencies to respond to threats in real time and draw key insights from troves of data, improving their overall security posture. In addition to faster detection and response times, MLOps and ModelOps tools can help agencies manage and maintain machine learning models at scale.
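
One way to picture an MLOps-style control is a promotion gate: a retrained model is registered and versioned only if it clears a minimum performance bar on held-out data. The sketch below is a minimal illustration using scikit-learn on synthetic data; the accuracy threshold, registry layout and feature semantics are all assumed.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical promotion gate: a new model version is registered only
# if it clears a minimum accuracy bar on held-out data.
MIN_ACCURACY = 0.90
REGISTRY = Path("model_registry")

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 8))            # stand-in for threat features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in for malicious label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

if accuracy >= MIN_ACCURACY:
    version = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    REGISTRY.mkdir(exist_ok=True)
    joblib.dump(model, REGISTRY / f"detector-{version}.joblib")
    (REGISTRY / f"detector-{version}.json").write_text(
        json.dumps({"accuracy": round(accuracy, 4), "version": version})
    )
    print(f"Promoted detector-{version} (accuracy={accuracy:.3f})")
else:
    print(f"Rejected: accuracy {accuracy:.3f} below gate {MIN_ACCURACY}")
```

Versioning each promoted artifact alongside its evaluation metrics gives auditors and incident responders a traceable record of which model was in production when.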

AI can also drive anomaly detection in ways traditional security methods cannot. By applying statistical and behavioral analytics to network traffic, AI systems can surface abnormal patterns and flag potential threats. Once alerted to an anomaly, agencies can act swiftly and decisively to mitigate the consequences of a harmful cyberattack. With continuous monitoring and real-time reporting, advanced AI algorithms help agencies stay informed of their cyber status.
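
For a sense of how this works in practice, the sketch below trains an unsupervised isolation forest on synthetic “normal” network flows and flags a flow that deviates sharply from them. The flow features (bytes transferred, session duration, distinct destination ports) are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative anomaly detector over per-flow features; the feature
# set is an assumption, not a standard schema.
rng = np.random.default_rng(7)
normal_flows = np.column_stack([
    rng.normal(50_000, 5_000, 2_000),   # bytes transferred
    rng.normal(30, 5, 2_000),           # session duration (seconds)
    rng.integers(1, 4, 2_000),          # distinct destination ports
])

detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(normal_flows)

# A suspicious flow: an exfiltration-sized transfer across many ports.
suspicious = np.array([[900_000, 4, 40]])
label = detector.predict(suspicious)  # -1 flags an anomaly
print("anomaly" if label[0] == -1 else "normal")
```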

While bolstering cyber defenses is essential, federal and defense agencies should also prioritize proactive threat hunting and threat intelligence to increase visibility into threats, improve decision-making and identify effective countermeasures.
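
A small example of a proactive hunt: command-and-control beacons often call home at machine-regular intervals, so near-constant gaps between outbound connections to the same destination are worth a closer look. The sketch below encodes that heuristic; the minimum event count and variability threshold are assumed values an analyst would tune.

```python
from datetime import datetime, timedelta
import statistics

# Illustrative hunt for C2 beaconing: near-constant intervals between
# outbound connections to one destination. Thresholds are assumptions.
MIN_EVENTS = 6
MAX_CV = 0.1  # a coefficient of variation this low looks machine-like

def looks_like_beacon(timestamps: list[datetime]) -> bool:
    if len(timestamps) < MIN_EVENTS:
        return False
    gaps = [
        (later - earlier).total_seconds()
        for earlier, later in zip(timestamps, timestamps[1:])
    ]
    mean = statistics.mean(gaps)
    return mean > 0 and statistics.pstdev(gaps) / mean < MAX_CV

start = datetime(2024, 1, 15, 9, 0)
regular = [start + timedelta(seconds=60 * i) for i in range(10)]
print(looks_like_beacon(regular))  # True: suspiciously regular cadence
```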

Using AI for zero trust

2024 will be a pivotal year for the federal cybersecurity landscape. Notably, the Office of Management and Budget’s memorandum M-22-09, issued in response to the Executive Order on Improving the Nation’s Cybersecurity, requires federal agencies to achieve specific zero-trust security goals by the end of fiscal year 2024.

While agencies have made considerable progress toward a zero-trust architecture in the past few years, there is still a long way to go to meet the memo’s goals. AI-powered cybersecurity tools can help federal agencies meet current zero-trust objectives and fortify the nation’s digital infrastructure.

As agencies strive to achieve a mature zero-trust security architecture, attack surface management (ASM) will be an essential component. ASM provides a comprehensive understanding of an agency's attack surface, encompassing all assets, including physical and virtual devices, networks, applications and data. Through ASM, agencies can quickly identify, classify and evaluate their assets, enabling them to prioritize their security efforts and more efficiently mitigate risks. 
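
As a simplified picture of that identify-and-classify step, the sketch below models a tiny asset inventory and pulls out the internet-facing assets that make up the externally reachable attack surface. The fields tracked per asset are assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Illustrative ASM inventory entry; fields are assumptions about what
# an agency might track for each asset.
@dataclass
class Asset:
    name: str
    kind: str              # e.g. "server", "application", "device"
    internet_facing: bool
    data_sensitivity: int  # 1 (public) .. 5 (most sensitive)

inventory = [
    Asset("payroll-db", "server", internet_facing=False, data_sensitivity=5),
    Asset("public-site", "application", internet_facing=True, data_sensitivity=1),
    Asset("vpn-gateway", "device", internet_facing=True, data_sensitivity=4),
]

# Classify: exposed assets form the externally reachable attack surface.
exposed = [asset for asset in inventory if asset.internet_facing]
print([asset.name for asset in exposed])  # ['public-site', 'vpn-gateway']
```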

AI-driven automation can dramatically shorten the time it takes cybersecurity personnel to understand and act on attack surface risks. Automation empowers security teams to rapidly sort through vast stores of data and make proactive, data-driven decisions that reduce the organization’s attack surface and flag threats before they can carry out any malicious activity.
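
Building on the inventory above, automated triage might rank findings by a composite risk score so analysts see the riskiest exposures first. The weighting below is a made-up formula for illustration, not an established scoring standard.

```python
# Illustrative triage: rank findings by a simple composite risk score.
# The weights and severity scales are assumptions.
findings = [
    {"asset": "vpn-gateway", "cvss": 9.8, "exposed": True,  "criticality": 4},
    {"asset": "payroll-db",  "cvss": 7.5, "exposed": False, "criticality": 5},
    {"asset": "public-site", "cvss": 5.3, "exposed": True,  "criticality": 1},
]

def risk_score(finding: dict) -> float:
    # Internet-facing findings are weighted up; criticality scales the rest.
    exposure_weight = 2.0 if finding["exposed"] else 1.0
    return finding["cvss"] * exposure_weight * finding["criticality"]

for finding in sorted(findings, key=risk_score, reverse=True):
    print(f"{finding['asset']:12s} score={risk_score(finding):6.1f}")
```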

Amid the shift to zero-trust principles, ASM, anomaly detection and ModelOps will play critical roles in securing the federal government’s digital ecosystem.

Key considerations for effective and responsible AI deployment

In many instances, training an AI model is iterative and requires close oversight, testing and validation to ensure the model produces the desired outcome. Notably, it is unlikely that every potential variable will be known and accounted for before the AI is deployed into an operational environment. As such, it’s imperative to have mechanisms in place to monitor the AI system after deployment as new and unique scenarios arise.
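
One simple post-deployment mechanism is an input guardrail: samples that fall outside the envelope seen during training are escalated for human review rather than scored automatically. The sketch below illustrates the idea with per-feature range checks; real systems would use more robust out-of-distribution tests.

```python
import numpy as np

# Illustrative guardrail: inputs outside the training envelope are
# routed to human review instead of being scored automatically.
training_data = np.random.default_rng(1).normal(size=(5_000, 4))
lower, upper = training_data.min(axis=0), training_data.max(axis=0)

def route(sample: np.ndarray) -> str:
    if np.any(sample < lower) or np.any(sample > upper):
        return "escalate: novel scenario outside training envelope"
    return "auto-score"

print(route(np.zeros(4)))        # auto-score
print(route(np.full(4, 10.0)))   # escalate
```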

As AI continues to evolve at an exponential rate, collaboration, education and transparency will be crucial. For federal agencies seeking to extract maximum value from AI-powered technology without opening themselves up to security threats, partnerships with organizations that possess both an in-depth understanding of AI systems and intimate knowledge of public sector processes and requirements will be invaluable.

As noted in the National Cybersecurity Strategy, strategic investments in research and development for revolutionary technologies such as AI are necessary to “drive outcomes that are economically sustainable and serve the national interest.”

Federal agencies, industry partners and leading academics must work together to shore up AI security and use AI as an unprecedented cybersecurity tool. In doing so, they can make AI the foundation for a future where agencies proactively identify, understand and address cyber threats before it’s too late.