Trump Administration Issues New AI Executive Order

On Dec. 11, the Trump Administration announced a new executive order intended "to remove barriers to United States AI leadership." The order asserts that the U.S. is in "a race with adversaries" for AI dominance and that "cumbersome regulation," particularly state regulation, impedes the nation's progress.
“First, State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups,” reads the order. “Second, State laws are increasingly responsible for requiring entities to embed ideological bias within models. For example, a new Colorado law banning ‘algorithmic discrimination’ may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups. Third, State laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.”
According to the administration, the order's purpose is to minimize the regulatory burden surrounding AI by creating a single national standard. That standard would preempt any state regulations that conflict with the policies it sets forth.
Andrew Bolster, Senior R&D Manager at Black Duck, comments, “As with any technological disruption, the balance between commercial innovation and public safety is a nuanced and complex area, particularly when it comes to the growth of ‘human-like intelligences’ as we’ve seen in the past years of AI research.
“Innovators need the wide-scale commercial protection of consistent, or at least translatable, regulatory regimes to grow into new and existing markets. Piecemeal and fractured regulations emerging across the U.S. would present a huge challenge to innovators, and the application of consistent regulatory guardrails at the federal level would improve that posture. However, just as it’s important for innovators to have a consistent regulatory regime, it’s important for that regulatory regime to be seen as stable in the long term for investors, and a knee-jerk ‘rulebook’ that gets overturned in another administration would be just as challenging to growth as a fractured constellation of regimes.
“In other markets such as the EU and China, there are stronger overarching regulatory regimes, and while these may not be as immediately beneficial to major tech companies, once established they can be expected to last long enough for structural ongoing investment in this existing but risky arena.”
How Should Organizations React?
Some security leaders anticipate this order will be met with opposition.
Mike Hamilton, former CISO of the City of Seattle and CTO of PISCES International, says, “States will most certainly sue the federal government, and an attempt to ban regulation at the state level is likely to make it to the SCOTUS. However, there are already examples that can be used as precedents: for example, New York’s Department of Financial Services and Department of Health already regulate finance and healthcare as a means of mitigating the dysfunction of Congress and executive orders that have pulled back on regulation at the federal level.
“Any serious attempt by the federal government to preempt state regulation will be litigated and will likely get to the Supreme Court. Justices that are sympathetic to the unitary executive theory may indeed find for the administration. This would wildly reduce, and possibly eliminate, states’ ability to regulate at all and put the entire issue of states’ rights in jeopardy. It would also exacerbate the international arms race for AI dominance and reduce trust in AI tools writ large.”
Whether or not state regulations are preempted, organizations must remain vigilant on the AI frontier — which means placing guardrails around their own use of AI.
Diana Kelley, Chief Information Security Officer at Noma Security, asserts, “Regardless of how AI regulations are structured, organizations need deep observability and strong governance to ensure AI systems operate as intended. Regulations often set the floor, not the ceiling. What truly protects people and businesses as they adopt and innovate with AI is the continuous ability to track model and agent provenance, observe how systems perform during testing and runtime, validate their outputs, and detect when AI or AI agents drift into unsafe or unintended territory. Transparency and explainability are essential because they help us understand the factors driving agentic AI outputs and actions. Day-to-day AI safety comes from disciplined oversight that reduces unnecessary risk and prevents harm.”
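Kelley's call to validate outputs and catch drift at runtime can be made concrete with a small code sketch. The Python example below is a minimal, hypothetical illustration, not a description of Noma Security's product or any vendor's API: the OutputGuardrail class, the SSN-style PII pattern, and the 20% block-rate alert threshold are all assumptions chosen for the sketch.

```python
# Illustrative sketch of a runtime AI guardrail (hypothetical, not from the source):
# validate each model output against a policy check, and flag drift when the
# rate of blocked outputs over a rolling window exceeds an alert threshold.
import re
from collections import deque

# Example policy check: block outputs containing US SSN-formatted strings.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class OutputGuardrail:
    """Validates model outputs and flags statistical drift in block rate."""

    def __init__(self, window: int = 100, block_rate_alert: float = 0.2):
        self.recent_blocks = deque(maxlen=window)  # 1 = blocked, 0 = passed
        self.block_rate_alert = block_rate_alert

    def check(self, output: str) -> str:
        """Return the output if it passes policy, else a withheld placeholder."""
        blocked = bool(PII_PATTERN.search(output))
        self.recent_blocks.append(1 if blocked else 0)
        return "[output withheld: policy violation]" if blocked else output

    def drifting(self) -> bool:
        """A rising block rate suggests the model or its inputs have drifted."""
        if not self.recent_blocks:
            return False
        rate = sum(self.recent_blocks) / len(self.recent_blocks)
        return rate > self.block_rate_alert

guard = OutputGuardrail()
print(guard.check("Customer SSN is 123-45-6789"))  # withheld by policy
if guard.drifting():
    print("alert: block rate exceeds threshold; escalate to governance team")
```

In practice, checks like this would sit in an organization's AI gateway or observability pipeline, with patterns and thresholds tuned to its own policies rather than the toy values shown here.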