With New AI Executive Order, Security Burdens Shift to Users and Organizations

On December 11th, the Trump Administration released a new executive order (EO) that has completely changed the long-term trajectory of AI regulation in the United States, effectively taking AI regulatory power away from the states and putting it in the hands of the federal government.
In the short term, this move has created disruption and risks that organizations will have to insulate themselves against. In the long term, however, it is likely to be a positive development: while the executive order’s primary aim is to eventually nullify state laws, it promises to do so as a first step toward creating centralized, nationwide AI legislation, which is ultimately safer and more efficient than the patchwork of state-level laws that exists today.
In the same way that international, harmonized standards help organizations understand regulators’ expectations, a single federal requirement, rather than potentially fifty competing and conflicting standards, will ultimately be more predictable and efficient for compliance.
Below, I’ve outlined what this development means for organizations that use AI, along with the tactics leaders can use to prepare for these changes over both the short and long term.
What AI Regulation Looks Like Today, and Where It’s Headed
Before the EO on December 11th, various states had passed their own AI laws, leading to an uneven and sometimes contradictory web of regulations across the country.
California, for example, passed a series of laws in recent years requiring AI companies to audit the safety of their models, limit discrimination, and disclose the presence or use of AI in certain settings. Texas, meanwhile, implemented less stringent regulations that took a different approach to preventing discrimination and limiting other potential harms to users. South Dakota, Colorado, and Utah have also passed their own laws.
The immediate consequence of the December 11 EO is that the future of these state laws is now in question, though it’s important to note that they haven’t been formally nullified yet.
The EO asserts the federal government’s authority to legislate in this area and promises that a nationwide framework will follow, but it doesn’t actually nullify the state laws or establish that framework itself; it only promises that these changes are soon to come.
As a result, organizations that use and make AI will have to grapple with uncertainty in the short term as state laws are superseded and replaced.
Preparing for the Future of AI Regulation
To navigate today’s uncertain regulatory picture and insulate your organization from risk over the near and long term, leaders should follow these three best practices:
- Automate compliance: With AI regulation in flux for the foreseeable future, organizations should automate regulatory compliance wherever possible rather than maintaining it manually. Manual verification still has its place, but the fluid nature of AI regulation today makes it inefficient and risky to update compliance measures by hand. Automation reduces the risk of human error and frees your team’s time for more strategic initiatives.
- Secure and govern your data: At the moment, it’s not entirely clear what obligations the makers of AI models and others in the AI industry have to their customers, and this will remain the case until the situation stabilizes and solidifies.
This regulatory uncertainty creates risk and vulnerability, which is why organizations should focus on controlling what they can: shoring up data security and governance limits exposure and gives organizations greater flexibility if and when the regulatory picture changes again. According to research, 75% of organizations that use AI experienced an AI-related data breach in 2025; the confusion surrounding AI regulation adds another layer of risk to an already complex and delicate security situation.
- Rethink compliance as a continual challenge: One major takeaway from this evolving landscape is that organizations need to treat regulatory compliance as a fluid, continual challenge rather than a static, point-in-time milestone to be achieved and forgotten. Given the breakneck speed of AI innovation, regulations will continue to evolve across regions for years to come. This is far from the final twist in the AI regulatory story.
At the end of the day, leaders can only control what they can control — and the actions of various governments and regulatory bodies aren’t on that list. That’s why it’s important to keep your organization secure, and to act nimbly as the situation continues to evolve.