Art Linton

Global AI Governance in Conflict — “Governance by Design” vs. “Governance by Consequence”

I read your post on recent U.S. presidential directives concerning AI policy with interest and wonder what you make of the following angle. The directives have crystallized a deepening structural conflict between two incompatible approaches to AI governance: one emerging in the United States, the other in Canada, the UK, and the European Union. This conflict is not theoretical. It is operational, immediate, and of direct relevance to everyone's cross-border activities.

REGULATORY PHILOSOPHIES IN CONFLICT

U.S. federal policy under America's AI Action Plan is built on the principle of 'govern by consequence.' The operating assumption is that AI development is a zero-sum race: whoever leads in speed and scale will dominate the global AI market for the foreseeable future. Regulatory structures that impose friction, including ESG- or DEI-based constraints, are viewed as strategic liabilities. The U.S. model favors rapid deployment and relies on enhanced civil and criminal enforcement to address bad actors after the fact, allowing the majority of the sector to move fast and win market share.

Canada, the UK, and the EU favor a 'govern by design' approach. AI systems, particularly high-impact or decision-making systems, are regulated throughout their lifecycle. This includes mandatory transparency, fairness audits, explainability, and pre-market government evaluation and approval. This approach embeds ESG and DEI principles into law as essential safeguards rather than optional considerations, leaving product success for the market to decide.

PRACTICAL ILLUSTRATION: CREDIT RISK SCORING MODEL

Imagine that a U.S. bank licenses a U.S.-built AI credit scoring tool. It complies with U.S. federal requirements focused on speed to market, including reduced upfront explainability and fairness-audit requirements.

However, when evaluated for use in Canada, the UK, or the EU, the same system would fail to meet requirements such as Canada's OSFI Guideline B-13, PIPEDA, or the (pending) AIDA. In general, Canadian, UK, and EU regulators expect evidence of bias mitigation, explainability, and accountability, and individuals have legal rights to meaningful explanations and challenge mechanisms under automated decision-making laws. ESG and DEI constraints are built in.
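To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the kind of audit evidence a 'govern by design' reviewer might expect a credit scoring deployment to produce: a group fairness metric plus a logged explanation for every automated decision. The metric choice, the 0.05 tolerance, and all names here are my own assumptions for illustration, not anything prescribed by OSFI B-13, PIPEDA, AIDA, or the EU AI Act.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    group: str           # protected attribute, retained only for auditing
    reasons: list[str]   # human-readable factors behind the decision

def demographic_parity_gap(decisions: list[Decision]) -> float:
    """Gap in approval rates between the best- and worst-treated groups."""
    rates = {}
    for g in {d.group for d in decisions}:
        members = [d for d in decisions if d.group == g]
        rates[g] = sum(d.approved for d in members) / len(members)
    return max(rates.values()) - min(rates.values())

def audit_report(decisions: list[Decision], max_gap: float = 0.05) -> dict:
    """Bundle the evidence a reviewer might ask for: a fairness metric,
    whether it falls inside an illustrative tolerance, and confirmation
    that every automated decision carries a logged explanation."""
    gap = demographic_parity_gap(decisions)
    return {
        "demographic_parity_gap": round(gap, 3),
        "within_tolerance": gap <= max_gap,
        "every_decision_explained": all(d.reasons for d in decisions),
    }

if __name__ == "__main__":
    sample = [
        Decision(True,  "group_a", ["debt-to-income ratio below 0.35"]),
        Decision(False, "group_a", ["recent delinquency on file"]),
        Decision(True,  "group_b", ["stable income history"]),
        Decision(False, "group_b", ["thin credit file"]),
    ]
    print(audit_report(sample))
```

Under the 'govern by consequence' posture described above, none of this evidence would typically be demanded before deployment; it would only become relevant if enforcement follows an alleged harm.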

The reverse is also true. A system built for Canadian, UK, or EU compliance, including ESG-aligned fairness logic and bias audits, would be penalized under multiple U.S. state and federal laws that prohibit the use of DEI or ESG factors in consequential decision-making unless strictly tied to (for example) financial risk/return, especially in public procurement. What is mandatory in one system may be disqualifying in another.
