Yesterday, the White House announced a sweeping executive order aimed to manage the risk of artificial intelligence (AI).

According to the press release, on October 30, President Joe Biden issued an executive order to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition and advance American leadership around the world.

The order aims to protect Americans from potential risks of AI systems by developing standards, tools and tests to ensure AI systems are safe, secure and trustworthy; establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software; and protect against the risks of using AI to engineer dangerous biological materials, among other risks.

“The Biden EO makes it clear: privacy, equity and civil rights in AI will be regulated,” said former U.S. National Security Agency (NSA) hacker and Faculty Member at IANS Research, Jake Williams. “In the startup world of ‘move fast and break things’, where technology often outpaces regulation, this EO sends a clear message on the areas startups should expect more regulation in the AI space.”

The executive order builds on previous actions, including the voluntary commitments from 15 companies to drive safe, secure and trustworthy development of AI.

According to the release, the executive order will direct the following actions:

  • New standards for AI safety and security
  • Protect Americans’ privacy
  • Advance equity and civil rights
  • Stand up for consumers, patients and students
  • Support workers
  • Promote innovation and competition
  • Advance American leadership abroad
  • Ensure responsible and effective government use of AI

Security leaders weigh in

Casey Ellis, Founder and CTO at Bugcrowd:

President Biden's Executive Order on artificial intelligence (AI) underscores a robust commitment to safety, cybersecurity and rigorous testing. The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the development of standards, tools and tests for AI's safety and security. Emphasis is placed on protecting Americans' privacy using advanced AI tools and techniques. Furthermore, the administration seeks international collaborations to set global standards for AI safety and cybersecurity. Overall, the order reflects a proactive approach to manage AI's promise while mitigating its potential risks.

Andre Durand, Founder and CEO, Ping Identity:

The executive order represents the first White House driven policy tied to AI regulation, and is a substantial step towards establishing more guidelines around the responsible use and development of AI. While the impact of AI on society has been profound for decades and will continue to persist, the EO aims to ensure a more secure and conscientious AI landscape. Safeguarding against its misuse and enforcing balanced regulation, means that we can embrace the benefits and future of trustworthy AI.

The EO also acknowledges that AI heavily relies on a constant flow of data, including user and device information, some of which may be sent to entities outside the U.S., making the need for stronger requirements around identity verification even more necessary. As criminals find novel ways to use AI, we can fight fire with fire and use AI - in responsible ways - to thwart their efforts. Organizations who adopt AI-driven solutions have the power to detect anomalies, enemy bots and prevent fraud at massive scale. Identity verification will also play a major role in stopping attacks going forward, so stronger requirements around identity proofing, authentication and federation will be necessary.

As we continue to see further regulations emerge, the private sector must also take part in the effort and collaborate with public stakeholders to achieve more responsible AI worldwide.

Marcus Fowler, CEO of Darktrace Federal:

AI has already made our personal and working lives easier, and its centrality to our lives is only poised to grow. But this also means that an attacker gaining control of an AI system could have serious consequences to infrastructure, a business, or our personal lives. This isn’t a risk that’s a decade away - it’s a risk right now. It’s positive that the Administration is working to establish standards to protect consumers as they use AI tools in their personal and working lives.

We firmly believe that you cannot achieve AI safety without cybersecurity: it is a prerequisite for safe and trusted general purpose AI. It’s also a challenge for the here-and-now, as well as a necessity for tackling longer term risks. Security needs to be by-design, embedded across every step of an AI system’s creation and deployment. That means taking action on data security, control and trust. It’s promising to see some specific actions in the Executive Order that start to address these challenges.

For example, we’re encouraged by the focus on protecting privacy and prioritizing the development and use of privacy-preserving techniques. We need to ensure AI does not compromise people’s privacy: companies need protect the data they collect and use to train their models. And outputs should be processed to ensure they don’t accidentally re-create protected data.

It’s encouraging to see the Administration will be taking actions to help achieve AI safety and to tackle the specific set of challenges posed by general purpose AI. These models can be used for a wide variety of purposes – both beneficial and harmful.

A compromise could negatively impact public trust in AI and derail its potential. An attacker gaining control of an AI system could have serious consequences to business, infrastructure and our personal lives. We’re already seeing indicators of security challenges posed by general purpose AI. It is lowering the barriers for attackers and making them faster; attackers are breaking general purpose AI tools to corrupt their outputs; and accidental insider threats can put IP or sensitive data at risk.

Cybersecurity is a prerequisite for safety, and so we hope to see more detail in the upcoming Executive Order outlining an approach to achieving more secure AI, and taking forward the commitments made by general purpose AI companies to tackle risks such as insider threats. This will help to achieve AI that is more privacy-preserving, predictable and reliable.

Andrew Barratt, Vice President at Coalfire:

President Biden’s Executive Order drives some very clear positive intentions for the use of AI, and the suggestions for testing and safety are all coming from a good place. The challenge with regulating the technology itself is that it might create some slowdown in the innovation. A great example of this is the protection required for the creation of dangerous biological materials. This could create strange scenarios where life sciences companies leveraging AI are already subject to very strict controls, start to try to pre-limit the use cases where AI is supporting them out of fear they might inadvertently something that is dangerous. The wording is also quite blunt as most pharma research and production inevitably produces something that is potentially dangerous, which is why we spend so much time on clinical testing and research. The follow-up is then a trade-off of good vs. bad. It feels like this should be very clearly directed towards the manufacturer of biological weapons, again something which is already tightly regulated in most western countries, and that regulating AI specifically adds minimal value to those already with strict laws. I doubt this would stop someone using the tech in a rogue nation/terrorist organization.

The cybersecurity message is one that the industry has already stepped up to. We’ve seen significant interest in the cyber-product space looking to integrate or leverage AI tools to manage typically high volume repetitive work as well as large models for complex threats. The real question is given the proliferation of AI vendors now, it’s becoming very conceivable that sophisticated threat actors will leverage multiple AI platforms to create code that continually evades detection triggering yet another technological arms race.

Timothy Morris, Chief Security Advisor at Tanium:

The main objectives of the AI executive order are to ensure that AI innovation is done safely and securely. It attempts to address several issues and expands upon the voluntary commitments that were made by 15 companies in September (like OpenAI, Google, Nvidia, Adobe, etc…). The EO will attempt to address immigration issues with the H1-B program to attract skills for AI talent, so that the U.S.'s technological advantage will be strengthened. This could include speeding up the process of that VISA program for highly skilled workers.

Regulations are intended to protect consumers/civilians against a wide array of possible abuses. Using the federal government's purchasing power can have heavy influence on any new technology. However, with any new innovations, regulations and red tape slow them down. The federal government can require agencies to perform evaluations of AI models to ensure they are safe and biases are limited or removed before a federal worker could use them. "Red-teaming" exercises are a type of evaluation that can be done to against AI and LLMs to accomplish this.

I can imagine that all departments within federal government agencies can be affected. The Departments of Defense and Energy are key ones that could assess AI to bolster national cybersecurity.

Privacy is something that will need to be baked into any AI regulation. And it isn't an easy problem to solve. Copyright infringement is another doozy. Deepfakes (images, video, audio) are all real risk of AI technology that can be used for harm. I would also expect there to be parts that speak to how AI is used (or not allowed) in elections with an upcoming election year.

Craig Jones, Vice President of Security Operations at Ontinue:

Given the rapidly changing landscape of cyber threats, it's no surprise that certain skills are particularly valuable. As cyber threats continue to become more complex, the application of AI and ML in cybersecurity has become indispensable. AI and ML algorithms are capable of learning from historical data and recognizing patterns in order to detect and counteract potential threats more efficiently than humans could. This technology has also been used to automate routine tasks, freeing up cyber security personnel to focus on more strategic initiatives. An analyst that is particularly skilled at prompt engineering will be able to bring an efficiency in the use of AI LLMs which will have an incredibly positive impact on the operation.

Jake Williams, former U.S. National Security Agency (NSA) hacker and Faculty member at IANS Research:

While it is significant that the Biden AI Executive Order (EO) regulates foundation models, most organizations won't be training foundation models. This provision is meant to protect society at large and will have minimal direct impact to most organizations.

The EO places emphasis on detection of AI generated content and creating measures to ensure the authenticity of content. While this will likely appease many in government who are profoundly concerned about deepfake content, as a practical matter, generation technologies will always outpace those used for detection. Furthermore, many AI detection systems would require levels of privacy intrusion that most would find unacceptable.

The risk of using generative AI for biological material synthesis is very real. Early ChatGPT boosters were quick to note the possibility of using the tool for "brainstorming" new drug compounds — as if this could replace pharmaceutical researchers (or imply that they weren't already using more specialized AI tools). The impact of using generative AI for synthesizing new biological mutations, without any understanding of the impacts, is a real risk and it's great to see federal funding being tied to the newly proposed AI safety standards.

Perhaps the most significant contribution of the EO is dedicating funding for research into privacy preserving technologies with AI. The emphasis on privacy and civil rights in AI use permeates the EO. At a societal level, the largest near-term risk of AI technologies is how they are used and what tasks they are entrusted with.