Security Magazine logo
search
cart
facebook twitter linkedin youtube
  • Sign In
  • Create Account
  • Sign Out
  • My Account
Security Magazine logo
  • NEWS
    • Security Newswire
    • Technologies & Solutions
  • MANAGEMENT
    • Leadership Management
    • Enterprise Services
    • Security Education & Training
    • Logical Security
    • Security & Business Resilience
    • Profiles in Excellence
  • PHYSICAL
    • Access Management
    • Fire & Life Safety
    • Identity Management
    • Physical Security
    • Video Surveillance
    • Case Studies (Physical)
  • CYBER
    • Cybersecurity News
    • More
  • BLOG
  • COLUMNS
    • Cyber Tactics
    • Leadership & Management
    • Security Talk
    • Career Intelligence
    • Leader to Leader
    • Cybersecurity Education & Training
  • EXCLUSIVES
    • Annual Guarding Report
    • Most Influential People in Security
    • The Security Benchmark Report
    • The Security Leadership Issue
    • Top Guard and Security Officer Companies
    • Top Cybersecurity Leaders
    • Women in Security
  • SECTORS
    • Arenas / Stadiums / Leagues / Entertainment
    • Banking/Finance/Insurance
    • Construction, Real Estate, Property Management
    • Education: K-12
    • Education: University
    • Government: Federal, State and Local
    • Hospitality & Casinos
    • Hospitals & Medical Centers
    • Infrastructure:Electric,Gas & Water
    • Ports: Sea, Land, & Air
    • Retail/Restaurants/Convenience
    • Transportation/Logistics/Supply Chain/Distribution/ Warehousing
  • EVENTS
    • Industry Events
    • Webinars
    • Solutions by Sector
    • Security 500 Conference
  • MEDIA
    • Videos
      • Cybersecurity & Geopolitical Discussion
      • Ask Me Anything (AMA) Series
    • Podcasts
    • Polls
    • Photo Galleries
  • MORE
    • Call for Entries
    • Classifieds & Job Listings
    • Continuing Education
    • Newsletter
    • Sponsor Insights
    • Store
    • White Papers
  • EMAG
    • eMagazine
    • This Month's Content
    • Advertise
  • SIGN UP!
CybersecurityManagementSecurity Leadership and ManagementCybersecurity News

White House announces executive order to manage AI risk

By Rachelle Blair-Frasier, Editor in Chief
artificial intelligence AI graphic

Image via Pixabay

October 31, 2023

Yesterday, the White House announced a sweeping executive order aimed to manage the risk of artificial intelligence (AI).

According to the press release, on October 30, President Joe Biden issued an executive order to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition and advance American leadership around the world.

The order aims to protect Americans from potential risks of AI systems by developing standards, tools and tests to ensure AI systems are safe, secure and trustworthy; establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software; and protect against the risks of using AI to engineer dangerous biological materials, among other risks.

“The Biden EO makes it clear: privacy, equity and civil rights in AI will be regulated,” said former U.S. National Security Agency (NSA) hacker and Faculty Member at IANS Research, Jake Williams. “In the startup world of ‘move fast and break things’, where technology often outpaces regulation, this EO sends a clear message on the areas startups should expect more regulation in the AI space.”

The executive order builds on previous actions, including the voluntary commitments from 15 companies to drive safe, secure and trustworthy development of AI.

According to the release, the executive order will direct the following actions:

  • New standards for AI safety and security
  • Protect Americans’ privacy
  • Advance equity and civil rights
  • Stand up for consumers, patients and students
  • Support workers
  • Promote innovation and competition
  • Advance American leadership abroad
  • Ensure responsible and effective government use of AI

Security leaders weigh in

Casey Ellis, Founder and CTO at Bugcrowd:

President Biden's Executive Order on artificial intelligence (AI) underscores a robust commitment to safety, cybersecurity and rigorous testing. The directive mandates developers to share safety test results with the U.S. government, ensuring AI systems are extensively vetted before public release. It highlights the importance of AI in bolstering cybersecurity, particularly in detecting AI-enabled fraud and enhancing software and network security. The order also champions the development of standards, tools and tests for AI's safety and security. Emphasis is placed on protecting Americans' privacy using advanced AI tools and techniques. Furthermore, the administration seeks international collaborations to set global standards for AI safety and cybersecurity. Overall, the order reflects a proactive approach to manage AI's promise while mitigating its potential risks.

Andre Durand, Founder and CEO, Ping Identity:

The executive order represents the first White House driven policy tied to AI regulation, and is a substantial step towards establishing more guidelines around the responsible use and development of AI. While the impact of AI on society has been profound for decades and will continue to persist, the EO aims to ensure a more secure and conscientious AI landscape. Safeguarding against its misuse and enforcing balanced regulation, means that we can embrace the benefits and future of trustworthy AI.

The EO also acknowledges that AI heavily relies on a constant flow of data, including user and device information, some of which may be sent to entities outside the U.S., making the need for stronger requirements around identity verification even more necessary. As criminals find novel ways to use AI, we can fight fire with fire and use AI - in responsible ways - to thwart their efforts. Organizations who adopt AI-driven solutions have the power to detect anomalies, enemy bots and prevent fraud at massive scale. Identity verification will also play a major role in stopping attacks going forward, so stronger requirements around identity proofing, authentication and federation will be necessary.

As we continue to see further regulations emerge, the private sector must also take part in the effort and collaborate with public stakeholders to achieve more responsible AI worldwide.

Marcus Fowler, CEO of Darktrace Federal:

AI has already made our personal and working lives easier, and its centrality to our lives is only poised to grow. But this also means that an attacker gaining control of an AI system could have serious consequences to infrastructure, a business, or our personal lives. This isn’t a risk that’s a decade away - it’s a risk right now. It’s positive that the Administration is working to establish standards to protect consumers as they use AI tools in their personal and working lives.

We firmly believe that you cannot achieve AI safety without cybersecurity: it is a prerequisite for safe and trusted general purpose AI. It’s also a challenge for the here-and-now, as well as a necessity for tackling longer term risks. Security needs to be by-design, embedded across every step of an AI system’s creation and deployment. That means taking action on data security, control and trust. It’s promising to see some specific actions in the Executive Order that start to address these challenges.

For example, we’re encouraged by the focus on protecting privacy and prioritizing the development and use of privacy-preserving techniques. We need to ensure AI does not compromise people’s privacy: companies need protect the data they collect and use to train their models. And outputs should be processed to ensure they don’t accidentally re-create protected data.

It’s encouraging to see the Administration will be taking actions to help achieve AI safety and to tackle the specific set of challenges posed by general purpose AI. These models can be used for a wide variety of purposes – both beneficial and harmful.

A compromise could negatively impact public trust in AI and derail its potential. An attacker gaining control of an AI system could have serious consequences to business, infrastructure and our personal lives. We’re already seeing indicators of security challenges posed by general purpose AI. It is lowering the barriers for attackers and making them faster; attackers are breaking general purpose AI tools to corrupt their outputs; and accidental insider threats can put IP or sensitive data at risk.

Cybersecurity is a prerequisite for safety, and so we hope to see more detail in the upcoming Executive Order outlining an approach to achieving more secure AI, and taking forward the commitments made by general purpose AI companies to tackle risks such as insider threats. This will help to achieve AI that is more privacy-preserving, predictable and reliable.

Andrew Barratt, Vice President at Coalfire:

President Biden’s Executive Order drives some very clear positive intentions for the use of AI, and the suggestions for testing and safety are all coming from a good place. The challenge with regulating the technology itself is that it might create some slowdown in the innovation. A great example of this is the protection required for the creation of dangerous biological materials. This could create strange scenarios where life sciences companies leveraging AI are already subject to very strict controls, start to try to pre-limit the use cases where AI is supporting them out of fear they might inadvertently something that is dangerous. The wording is also quite blunt as most pharma research and production inevitably produces something that is potentially dangerous, which is why we spend so much time on clinical testing and research. The follow-up is then a trade-off of good vs. bad. It feels like this should be very clearly directed towards the manufacturer of biological weapons, again something which is already tightly regulated in most western countries, and that regulating AI specifically adds minimal value to those already with strict laws. I doubt this would stop someone using the tech in a rogue nation/terrorist organization.

The cybersecurity message is one that the industry has already stepped up to. We’ve seen significant interest in the cyber-product space looking to integrate or leverage AI tools to manage typically high volume repetitive work as well as large models for complex threats. The real question is given the proliferation of AI vendors now, it’s becoming very conceivable that sophisticated threat actors will leverage multiple AI platforms to create code that continually evades detection triggering yet another technological arms race.

Timothy Morris, Chief Security Advisor at Tanium:

The main objectives of the AI executive order are to ensure that AI innovation is done safely and securely. It attempts to address several issues and expands upon the voluntary commitments that were made by 15 companies in September (like OpenAI, Google, Nvidia, Adobe, etc…). The EO will attempt to address immigration issues with the H1-B program to attract skills for AI talent, so that the U.S.'s technological advantage will be strengthened. This could include speeding up the process of that VISA program for highly skilled workers.

Regulations are intended to protect consumers/civilians against a wide array of possible abuses. Using the federal government's purchasing power can have heavy influence on any new technology. However, with any new innovations, regulations and red tape slow them down. The federal government can require agencies to perform evaluations of AI models to ensure they are safe and biases are limited or removed before a federal worker could use them. "Red-teaming" exercises are a type of evaluation that can be done to against AI and LLMs to accomplish this.

I can imagine that all departments within federal government agencies can be affected. The Departments of Defense and Energy are key ones that could assess AI to bolster national cybersecurity.

Privacy is something that will need to be baked into any AI regulation. And it isn't an easy problem to solve. Copyright infringement is another doozy. Deepfakes (images, video, audio) are all real risk of AI technology that can be used for harm. I would also expect there to be parts that speak to how AI is used (or not allowed) in elections with an upcoming election year.

Craig Jones, Vice President of Security Operations at Ontinue:

Given the rapidly changing landscape of cyber threats, it's no surprise that certain skills are particularly valuable. As cyber threats continue to become more complex, the application of AI and ML in cybersecurity has become indispensable. AI and ML algorithms are capable of learning from historical data and recognizing patterns in order to detect and counteract potential threats more efficiently than humans could. This technology has also been used to automate routine tasks, freeing up cyber security personnel to focus on more strategic initiatives. An analyst that is particularly skilled at prompt engineering will be able to bring an efficiency in the use of AI LLMs which will have an incredibly positive impact on the operation.

Jake Williams, former U.S. National Security Agency (NSA) hacker and Faculty member at IANS Research:

While it is significant that the Biden AI Executive Order (EO) regulates foundation models, most organizations won't be training foundation models. This provision is meant to protect society at large and will have minimal direct impact to most organizations.

The EO places emphasis on detection of AI generated content and creating measures to ensure the authenticity of content. While this will likely appease many in government who are profoundly concerned about deepfake content, as a practical matter, generation technologies will always outpace those used for detection. Furthermore, many AI detection systems would require levels of privacy intrusion that most would find unacceptable.

The risk of using generative AI for biological material synthesis is very real. Early ChatGPT boosters were quick to note the possibility of using the tool for "brainstorming" new drug compounds — as if this could replace pharmaceutical researchers (or imply that they weren't already using more specialized AI tools). The impact of using generative AI for synthesizing new biological mutations, without any understanding of the impacts, is a real risk and it's great to see federal funding being tied to the newly proposed AI safety standards.

Perhaps the most significant contribution of the EO is dedicating funding for research into privacy preserving technologies with AI. The emphasis on privacy and civil rights in AI use permeates the EO. At a societal level, the largest near-term risk of AI technologies is how they are used and what tasks they are entrusted with.

KEYWORDS: artificial intelligence (AI) deepfakes regulations White House cybersecurity

Share This Story

Looking for a reprint of this article?
From high-res PDFs to custom plaques, order your copy today!

Rachelle blairfrasier headshot white

Rachelle Blair-Frasier is Security magazine’s Editor in Chief. Blair-Frasier handles eMagazine features, as well as writes and publishes online news and web exclusives on topics including physical security, risk management, cybersecurity and emerging industry trends. She helps coordinate multimedia content and manages Security magazine's social media presence, in addition to working with security leaders to publish industry insights. Blair-Frasier brings more than 15 years of journalism and B2B writing and editorial experience to the role.

Recommended Content

JOIN TODAY
To unlock your recommendations.

Already have an account? Sign In

  • Security's Top Cybersecurity Leaders 2024

    Security's Top Cybersecurity Leaders 2024

    Security magazine's Top Cybersecurity Leaders 2024 award...
    Top Cybersecurity Leaders
    By: Security Staff
  • cyber brain

    The intersection of cybersecurity and artificial intelligence

    Artificial intelligence (AI) is a valuable cybersecurity...
    Cyber Tactics Column
    By: Pam Nigro
  • artificial intelligence AI graphic

    Assessing the pros and cons of AI for cybersecurity

    Artificial intelligence (AI) has significant implications...
    New Security Technology
    By: Charles Denyer
Subscribe For Free!
  • Security eNewsletter & Other eNews Alerts
  • eMagazine Subscriptions
  • Manage My Preferences
  • Online Registration
  • Mobile App
  • Subscription Customer Service

More Videos

Sponsored Content

Sponsored Content is a special paid section where industry companies provide high quality, objective, non-commercial content around topics of interest to the Security audience. All Sponsored Content is supplied by the advertising company and any opinions expressed in this article are those of the author and not necessarily reflect the views of Security or its parent company, BNP Media. Interested in participating in our Sponsored Content section? Contact your local rep!

close
  • Crisis Response Team
    Sponsored byEverbridge

    Automate or Fall Behind – Crisis Response at the Speed of Risk

  • Perimeter security
    Sponsored byAMAROK

    Why Property Security is the New Competitive Advantage

  • Duty of Care
    Sponsored byAMAROK

    Integrating Technology and Physical Security to Advance Duty of Care

Popular Stories

Red laptop

Cybersecurity leaders discuss Oracle’s second recent hack

Pills spilled

More than 20,000 sensitive medical records exposed

Coding on screen

Research reveals mass scanning and exploitation campaigns

Laptop in darkness

Verizon 2025 Data Breach Investigations Report shows rise in cyberattacks

Computer with binary code hovering nearby

Cyberattacks Targeting US Increased by 136%

2025 Security Benchmark banner

Events

May 22, 2025

Proactive Crisis Communication

Crisis doesn't wait for the right time - it strikes when least expected. Is your team prepared to communicate clearly and effectively when it matters most?

November 17, 2025

SECURITY 500 Conference

This event is designed to provide security executives, government officials and leaders of industry with vital information on how to elevate their programs while allowing attendees to share their strategies and solutions with other security industry executives.

View All Submit An Event

Products

Security Culture: A How-to Guide for Improving Security Culture and Dealing with People Risk in Your Organisation

Security Culture: A How-to Guide for Improving Security Culture and Dealing with People Risk in Your Organisation

See More Products

Related Articles

  • white house behind trees

    Biden-⁠Harris announce key AI actions following landmark executive order

    See More
  • Resilience

    Knowledge is key to mitigate risk, maximize resilience

    See More
  • vertical lights on black background

    White House announces plan to encourage safe AI use

    See More

Related Products

See More Products
  • security culture.webp

    Security Culture: A How-to Guide for Improving Security Culture and Dealing with People Risk in Your Organisation

  • Risk-Analysis.gif

    Risk Analysis and the Security Survey, 4th Edition

  • physical security.webp

    Physical Security Assessment Handbook An Insider’s Guide to Securing a Business

See More Products

Events

View AllSubmit An Event
  • March 6, 2025

    Why Mobile Device Response is Key to Managing Data Risk

    ON DEMAND: Most organizations and their associating operations have the response and investigation of computers, cloud resources, and other endpoint technologies under lock and key. 
  • November 14, 2024

    Best Practices for Integrating AI Responsibly

    ON DEMAND: Discover how artificial intelligence is reshaping the business landscape. AI holds immense potential to revolutionize industries, but with it comes complex questions about its risks and rewards.
View AllSubmit An Event
×

Sign-up to receive top management & result-driven techniques in the industry.

Join over 20,000+ industry leaders who receive our premium content.

SIGN UP TODAY!
  • RESOURCES
    • Advertise
    • Contact Us
    • Store
    • Want More
  • SIGN UP TODAY
    • Create Account
    • eMagazine
    • eNewsletter
    • Customer Service
    • Manage Preferences
  • SERVICES
    • Marketing Services
    • Reprints
    • Market Research
    • List Rental
    • Survey/Respondent Access
  • STAY CONNECTED
    • LinkedIn
    • Facebook
    • YouTube
    • X (Twitter)
  • PRIVACY
    • PRIVACY POLICY
    • TERMS & CONDITIONS
    • DO NOT SELL MY PERSONAL INFORMATION
    • PRIVACY REQUEST
    • ACCESSIBILITY

Copyright ©2025. All Rights Reserved BNP Media.

Design, CMS, Hosting & Web Development :: ePublishing