What America’s AI Action Plan Means for AI Governance
AI governance is still on the U.S. AI policy menu | Edition #24
Hey 👋
I’m Oliver Patel, author and creator of Enterprise AI Governance.
This free newsletter delivers practical, actionable, and timely insights for AI governance professionals.
My goal is simple: to empower you to understand, implement, and master AI governance.
If you haven’t already, sign up below and share it with your colleagues. Thank you!
This week’s newsletter is a deep-dive analysis of Winning The Race: America’s AI Action Plan, a major policy document published by The White House on 23rd July 2025. The AI Action Plan sets out the AI policy priorities of the Trump administration.
Whilst AI governance and responsible AI have been deprioritised by the administration, with the focus firmly on accelerating AI innovation, supporting the technology sector, and deregulating to ease compliance burdens, it is overly simplistic to say that AI governance is no longer relevant in the U.S.
And it is misguided to claim that companies operating in the U.S. should not take seriously the ethical, legal, and regulatory risks posed by AI development and use, just because of the policy shift at the federal level.
Although AI governance has been demoted from a main course to a side dish, it still plays an important role on the U.S. AI policy menu. This is for three core reasons:
First, the AI Action Plan itself highlights significant AI risks and challenges, such as limited interpretability, poor robustness, and misalignment, and directs federal funding and resources towards researching and mitigating them. It also acknowledges the importance of AI standards and assurance for safe deployment in high-risk domains.
Second, the Office of Management and Budget (OMB) has issued two detailed memoranda to federal agencies, requiring the establishment and implementation of effective AI governance structures, policies, and processes, and stating that “effective AI governance is key to accelerated innovation”.
Third, there are over 130 state AI laws, including several California laws and Colorado’s ‘AI Act’, as well as various existing federal laws (e.g., on medical devices) that are relevant for AI. Taken together, these create a complex patchwork of compliance obligations for U.S. firms.
The purpose of this article is not to downplay or gloss over the meaningful AI policy pivot and ideological reframing advanced by this administration, but to counter the emerging narrative that AI governance and the responsible use of AI are no longer relevant in the U.S., in light of these political shifts.
If you think AI governance is not relevant for your organisation because the current administration is pro-AI, you will be in for a rude awakening if things go wrong. Deregulation does not mean that AI risks no longer apply to you.
Setting the scene: from Biden to Trump
President Trump’s ‘America First’ mantra is not just about tariffs, defence spending, and immigration; it also applies to AI. Although successive administrations have prioritised strengthening U.S. global leadership in AI, there are major differences between Trump and Biden’s respective approaches.
Immediately upon assuming office, Trump revoked Biden’s Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of AI, describing it as a harmful “barrier to American AI innovation”. This was significant, as the Executive Order represented the first meaningful attempt to promote responsible AI via regulation of private sector AI activities. Under this regime (which was never fully implemented), developers of the most powerful AI models were obliged to perform safety and security testing and report the results back to the U.S. government.
Shortly thereafter, Trump signed Executive Order 14179: Removing Barriers to American Leadership in AI. This confirmed that the administration’s AI policy objective is to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security”.
This Executive Order led to the publication of ‘Winning the Race: America’s AI Action Plan’ in July 2025. This document outlines the policy ambitions and measures that will be prioritised. It sets out how the U.S. can achieve and maintain “unquestioned and unchallenged global technological dominance”. The AI Action Plan was the result of a two-month public consultation, which drew over 10,000 responses from a multitude of organisations.
The second Trump administration has been consistent in its strong critique of any measures to impose (or retain) regulatory obligations on the AI sector that could impede or slow innovation.
The AI Action Plan: a new policy direction
The AI Action Plan contains dozens of recommended policy actions, spread across three core pillars that drive the policy agenda:
Pillar 1. Accelerate AI Innovation
Pillar 2. Build American AI Infrastructure
Pillar 3. Lead in International AI Diplomacy and Security
Taken together, these policy actions represent a major programme of work across the federal government. If implemented, they will fundamentally reshape the U.S. government’s approach to AI and how it is supported, developed, and deployed, both nationwide and internationally.
The AI Action Plan is underpinned by three cross-cutting principles:
Safeguard American workers impacted by the AI revolution.
Ensure AI systems are free from “ideological bias” and pursue “objective truth”.
Prevent malicious actors from misusing or stealing advanced AI technologies.
The majority of the document outlines detailed and specific recommended policy actions spanning each of the three pillars. Below, I provide a snapshot of six of the most relevant and interesting policy actions within each pillar that AI governance leaders should be aware of.
Pillar 1. Accelerate AI Innovation
Snapshot of recommended policy actions
Take action to review and remove any existing Federal regulations that impede AI innovation.
Withhold funding for AI-related initiatives from states with “burdensome AI regulations”.
Prioritise AI literacy and “AI skill development” across all education levels, as well as in the wider workforce, by making it a core objective of education and workforce funding programmes.
Leverage federal research funding to tackle risks and challenges relating to AI interpretability, explainability, robustness, and misalignment, as these factors could impede safe deployment in sensitive domains, like national security.
Ensure the federal government only procures “unbiased” and “ideologically neutral” large language models. This is reinforced by Executive Order 14319: Preventing Woke AI in the Federal Government, which accompanied the publication of the AI Action Plan.
Revise the NIST AI Risk Management Framework, which has been widely adopted, to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”.
Key takeaways: The most significant part of this section is the direct link made between AI regulations and the ability of companies to innovate at speed. Put simply, the Trump administration believes that it is too early to regulate AI and that doing so would have damaging economic and security consequences for the U.S., not least due to the rapid AI advances made by China in recent years. That is why the focus of the AI Action Plan is on deregulation and making life easier for AI companies. Furthermore, although the proposed 10-year moratorium on state AI laws was comprehensively rejected by the Senate, the core idea survives in a different form: a policy recommendation to withhold AI funding from states with AI regulations that go beyond what the administration deems acceptable.
Pillar 2. Build American AI Infrastructure
Snapshot of recommended policy actions
Fast-track and streamline processes for data centre construction review, approval, and licensing.
Leverage and make available federal government land for data centre and AI infrastructure construction.
Stabilise the grid, ensuring consistent and reliable power generation capacity nationwide.
Promote adoption of “classified compute environments” for sensitive government AI processing.
Launch a national initiative to identify high-priority AI infrastructure occupations and support training programmes for workers in these fields.
Update the Department of Defense’s Responsible AI Frameworks and Toolkits, focusing on security-by-design.
These policy recommendations are reinforced by Executive Order 14318: Accelerating Federal Permitting of Data Center Infrastructure, which accompanied the AI Action Plan.
Key takeaways: This pillar reflects the administration's recognition that “winning the AI race” is as much about hardware and the enabling infrastructure as it is about software and developing frontier AI models. The focus on fast-tracking data centre approvals and leveraging federal land indicates a direct response to China's state-backed, ever-accelerating AI infrastructure development.
Pillar 3. Lead in International AI Diplomacy and Security
Snapshot of recommended policy actions
Boost exports of U.S. AI technologies, including hardware, models, software, applications, and standards, by creating and promoting “full-stack AI export packages”.
Advocate for “pro-innovation” approaches to international AI governance that reflect “American values” and shift away from “burdensome regulations”.
Counter Chinese influence in international governance forums and AI standard-setting organisations.
Strengthen enforcement of existing AI export controls. This includes leveraging advanced chip location verification technologies, to track compliance breaches.
Develop and introduce new AI export controls, targeting “semiconductor manufacturing sub-systems”. This would bring some of the specific component parts required for semiconductor development within the scope of U.S. AI export control regulations.
Support and fund the evaluation of frontier AI systems for national security risks. This should be done via partnerships between frontier AI companies and NIST’s Center for AI Standards and Innovation (CAISI).
These policy recommendations are reinforced by Executive Order 14320: Promoting the Export of the American AI Technology Stack, which accompanied the AI Action Plan.
Key takeaways: The U.S. government has specifically criticised the various global AI governance initiatives of the past few years, taking aim at work advanced in forums like the UN, the OECD, and the AI Safety Summit. However, it is not yet apparent whether the U.S. has the support of its allies, particularly those in Europe, to advance and agree an alternative vision for international AI governance.
Why AI governance is still on the U.S. AI policy menu
It is understandable that much analysis of the AI Action Plan has focused on its deregulatory thrust. If the Plan is implemented, it is likely that various federal AI-related regulations and initiatives will be stripped back or eliminated, and states may be penalised for regulating AI. The overarching message is one of strong reluctance to impose or retain any regulatory obligations on the private sector that could restrict, slow, or otherwise impede AI development and innovation.
However, it is too simplistic to say that AI governance and responsible AI are no longer relevant in the U.S., even at the federal level. Although AI governance has been demoted from a main course to a side dish, it still plays an important role on the U.S. AI policy menu. This is for three reasons.
Firstly, the AI Action Plan itself highlights various AI-related risks that could slow innovation, by undermining the effectiveness and security of AI deployment. In doing so, the administration acknowledges that, at least in certain contexts, AI governance is critical as an enabler of innovation, as it is how we build trust and confidence in the reliability of AI.
The risks and challenges flagged include limited interpretability and explainability, poor robustness, and misalignment, as well as the lack of reliable and standardised AI evaluations. These challenges are highlighted as limiting the potential use of AI in sensitive, safety-critical domains, like national security. To this end, new research, partnerships, and policy initiatives will be supported to advance understanding of AI evaluations and safe deployment in high-impact domains.
The NIST AI Risk Management Framework (AI RMF) will remain in place (albeit with references to DEI and climate change removed). And NIST’s CAISI will play a key role in various AI governance and safety activities, such as evaluating specific federal AI systems and frontier AI models, updating AI incident response plans and frameworks, and developing AI standards (e.g., on AI assurance for the intelligence community).
Moreover, the significant and potentially disruptive impact that AI will have on the workforce is acknowledged as an important risk. Various policy actions are proposed to support workers navigating the disruption, such as re-skilling, further education, and AI literacy.
Secondly, two memoranda issued in April 2025 by OMB, which directly assists the President, require federal agencies to establish AI governance structures, policies, and processes. The memoranda are:
OMB M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust
OMB M-25-22: Driving Efficient Acquisition of AI in Government
My reading of the OMB memoranda, in conjunction with the AI Action Plan, is that the U.S. government does acknowledge the importance and necessity of AI governance for its own work. However, it does not want to impact or restrict private sector activities by mandating AI governance.
Nevertheless, for U.S. companies, the key takeaway should not be “let’s abandon AI governance”. It should be “let’s carefully consider the risks that even this extremely pro-AI government thinks are worth mitigating, and refine our own internal AI governance framework accordingly”.
OMB requires federal agencies, inter alia, to maintain and publish an AI use case inventory, convene an AI Governance Board, develop a generative AI usage policy, and implement minimum risk-management practices for “high-impact AI”.
An AI use case or system is defined as “high-impact” when “its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety”. When agencies develop or use high-impact AI, they must conduct pre-deployment testing, complete an AI impact assessment, and implement ongoing monitoring and human oversight.
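For readers who want to mirror this approach internally, the OMB definition translates naturally into a screening step in a use case intake process. Below is a minimal, hypothetical Python sketch of that idea; the class, field names, and the simple boolean test are illustrative assumptions on my part, not anything prescribed by the memoranda.

```python
from dataclasses import dataclass

# Hypothetical inventory entry. Field names are illustrative, not from OMB M-25-21.
@dataclass
class AIUseCase:
    name: str
    # Does the output serve as a principal basis for decisions with a legal,
    # material, binding, or significant effect on rights or safety? (OMB definition)
    principal_basis_for_rights_or_safety_decisions: bool
    pre_deployment_testing_done: bool = False
    impact_assessment_done: bool = False
    ongoing_monitoring_in_place: bool = False
    human_oversight_in_place: bool = False

def outstanding_practices(uc: AIUseCase) -> list[str]:
    """List the minimum risk-management practices still missing for a high-impact use case."""
    if not uc.principal_basis_for_rights_or_safety_decisions:
        return []  # not "high-impact" under the OMB definition; no minimum practices triggered
    required = {
        "pre-deployment testing": uc.pre_deployment_testing_done,
        "AI impact assessment": uc.impact_assessment_done,
        "ongoing monitoring": uc.ongoing_monitoring_in_place,
        "human oversight": uc.human_oversight_in_place,
    }
    return [practice for practice, done in required.items() if not done]

# Example: a loan-decisioning model flagged at intake.
model = AIUseCase(
    name="credit-scoring-v2",
    principal_basis_for_rights_or_safety_decisions=True,
    pre_deployment_testing_done=True,
)
print(outstanding_practices(model))
# -> ['AI impact assessment', 'ongoing monitoring', 'human oversight']
```

In practice, of course, whether an output is a “principal basis” for a decision is a judgment call that needs human review; the point is simply that the OMB definition is concrete enough to anchor an intake workflow.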
Federal agencies must also take seriously AI procurement due diligence and agree robust contracts with AI vendors that protect U.S. government data.
Indeed, OMB even goes as far as describing effective AI governance as “key to accelerated innovation as it [...] fosters accountability while reducing unnecessary barriers to AI adoption”.
The third and final reason why AI governance still matters in the U.S. is that, following the unsuccessful attempt to block states from enforcing their AI laws by imposing a 10-year moratorium, U.S. companies still have a complex patchwork of over 130 state AI laws to contend with.
The states with the most significant AI-related laws are California, Colorado, Maryland, Utah, and Virginia. Dozens of states have regulated AI in one way or another, in domains like privacy, recruitment, and healthcare. Even if nothing else is passed at the federal level, these state laws are significant for many companies. And there are also various existing federal laws of relevance for AI, such as on medical devices or fair credit.
Given the complexity of developing different internal AI governance frameworks for different jurisdictions, it makes much more sense to have a company-wide AI governance framework that promotes and facilitates compliance across all the important jurisdictions you operate in.
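To make that concrete, here is a deliberately simplified Python sketch of the “one framework, many jurisdictions” idea: map each jurisdiction’s obligations and take their union as the company-wide baseline. The obligation labels below are rough illustrative placeholders, not a legal mapping.

```python
# Illustrative placeholders only; real obligations require legal analysis per jurisdiction.
JURISDICTION_OBLIGATIONS: dict[str, set[str]] = {
    "Colorado": {"algorithmic discrimination risk assessment", "consumer notice"},
    "California": {"training data disclosure", "consumer notice"},
    "Utah": {"generative AI disclosure"},
}

def company_wide_baseline(jurisdictions: list[str]) -> set[str]:
    """Union of obligations across every jurisdiction the company operates in,
    so that a single framework satisfies all of them at once."""
    baseline: set[str] = set()
    for j in jurisdictions:
        baseline |= JURISDICTION_OBLIGATIONS.get(j, set())
    return baseline

print(sorted(company_wide_baseline(["Colorado", "California", "Utah"])))
# -> ['algorithmic discrimination risk assessment', 'consumer notice',
#     'generative AI disclosure', 'training data disclosure']
```

The design choice matters: a single baseline built from the union of obligations is maintained once and audited once, whereas per-jurisdiction frameworks multiply documentation, training, and review overhead.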
In sum, next time someone says “AI governance isn’t relevant in the U.S. anymore”, take it with a pinch of salt.
Ultimately, there is much more to AI governance than just regulatory compliance. It is about operationalising your ethical values and principles, to drive trust and confidence in the high-risk and high-impact AI technologies you are developing, deploying, and using. You can't abandon this without consequences.
Global AI Governance in Conflict — “Governance by Design” vs. “Governance by Consequence”
I read your post on recent U.S. presidential directives concerning AI policy with interest and wonder what you make of this angle of approach. The directives have crystallized a deepening structural conflict between two incompatible approaches to AI governance: one emerging in the United States, the other in Canada, the UK, and the European Union. This conflict is not theoretical. It is operational, immediate, and of direct relevance to everyone’s cross-border activities.
REGULATORY PHILOSOPHIES IN CONFLICT
U.S. federal policy under America’s AI Action Plan is built on the principle of 'govern by consequence.' The operating assumption is that AI development is a zero-sum race: whoever leads in speed and scale will dominate the global AI market for the foreseeable future. Regulatory structures that impose friction, including ESG or DEI-based constraints, are viewed as strategic liabilities. The U.S. model favors rapid deployment and relies on enhanced civil and criminal enforcement to address bad actors, allowing the majority of the sector to move fast and win market share.
Canada, the UK, and the EU favor a 'govern by design' approach. AI systems, particularly high-impact or decision-making systems, are regulated throughout their lifecycle. This includes mandatory transparency, fairness audits, explainability, and pre-market government evaluation and approval. This approach embeds ESG and DEI principles into law as essential safeguards rather than optional considerations, leaving product success for the market to decide.
PRACTICAL ILLUSTRATION: CREDIT RISK SCORING MODEL
Imagine a U.S. bank licenses a U.S.-built AI credit scoring tool. It complies with U.S. federal requirements focused on speed to market, including reduced upfront explainability and fairness audit requirements.
However, when evaluated for use in Canada, the UK, or the EU, the same system would fail to meet requirements like Canada’s OSFI B-13 Guideline, PIPEDA, or AIDA (pending). In general, Canadian, UK, and EU regulators expect evidence of bias mitigation, explainability, and accountability, and individuals have legal rights to meaningful explanations and challenge mechanisms under automated decision-making laws. ESG and DEI constraints are built in.
The reverse is also true. A system built for Canadian, UK, or EU compliance, including ESG-aligned fairness logic and bias audits, would be penalized under multiple U.S. state and federal laws that prohibit the use of DEI or ESG factors in consequential decision-making unless strictly tied to (for example) financial risk/return, especially in public procurement. What is mandatory in one system may be disqualifying in another.