How could the EU AI Act change?
Explaining the Commission's proposed amendments | #33
Hey 👋
I'm Oliver Patel, author and creator of Enterprise AI Governance.
On Wednesday 19 November 2025, the European Commission unveiled its Digital Omnibus Package, proposing targeted yet impactful amendments to the EU AI Act. This article distils what could change, why it matters for enterprise AI governance practitioners, and what to watch as trilogue negotiations begin.
If you value my work and want to learn more about the EU AI Act and AI governance implementation, sign up to secure a 25% discount for my forthcoming book, Fundamentals of AI Governance (2026).
What are the most important proposed EU AI Act changes?
On Wednesday 19 November 2025, the European Commission (henceforth the Commission) announced proposed changes to the AI Act by publishing a proposal for a new regulation. These changes are presented as "innovation-friendly AI rules" that will "reduce compliance costs for businesses". The purpose of the proposed regulation is to "simplify" the AI Act with targeted yet meaningful amendments.
This is part of the Commission's broader "Digital Package", which is a major programme of work aiming to "simplify EU digital rules and boost innovation". The Digital Package, which encompasses the "Digital Omnibus Regulation", includes proposals to amend flagship digital laws like the AI Act, the GDPR, the ePrivacy Directive, the Data Act, and the NIS 2 Directive. Specifically, the Commission simultaneously published proposals for two regulations (so it's not really an "omnibus" anymore):
Proposal for Regulation on simplification of AI rules (which covers the AI Act amendments); and
Proposal for Regulation on simplification of the digital legislation (which covers the amendments to the other EU digital laws mentioned above).
This article explains and analyses the six most important AI Act amendments that enterprise AI governance professionals need to understand. These are:
Timeline changes for high-risk AI system compliance.
Timeline changes for transparency-requiring AI system compliance.
Limiting registration in the public EU database for high-risk AI systems.
Softening of the AI literacy obligation.
Expanding the scope of the European AI Office's regulatory powers.
Proportionality for small mid-cap (SMC) enterprises.
For each of these six proposed amendments, I explain what is in the law today, what changes are being proposed, and what the impact of these changes would be.
Earlier this week, I published an article on Enterprise AI Governance that explains how we got to this point and why the EU is now doing this. It provides a detailed account of the background context to AI Act simplification, highlighting how the "Draghi report", which argued that digital regulatory burdens are impeding European growth and competitiveness, has influenced the Commission's proposals.
It also outlines three important caveats on the EU's legislative process that are worth repeating:
This merely represents the proposal of one EU institution (the Commission). Such amendments of EU law require formal approval from both the European Parliament and the EU member states via the Council of the EU (the Council).
Therefore, this proposal will now be followed by lengthy and potentially fraught trilogue negotiations between the Commission, European Parliament, and Council.
Finally, it is impossible to predict what the final legislative text will consist of, how long the negotiation and approval process will take, and whether approval to amend the AI Act will ultimately be agreed on and enacted.
Scope of this article: this is not an exhaustive analysis of the entire AI Act simplification proposal and it does not cover every proposed amendment in the Commission's 65-page document. Rather, it focuses on the six proposed changes that would be most consequential (if passed) for enterprises implementing AI governance. Also, it intentionally does not cover the proposed changes to the GDPR, nor the AI Act amendments that are directly related to the processing of personal data (e.g., use of sensitive personal data for bias mitigation), as this topic will be addressed in a future article on Enterprise AI Governance.
Disclaimer: this article is not intended to be legal advice and must not be relied upon or used in that way. Always consult a qualified legal professional.
1. Timeline changes for high-risk AI system compliance
What is in law today?
The key provisions relating to high-risk AI systems apply from 2 August 2026. This means that from 2 August 2026, unless there is a change in the law, providers and deployers must adhere to the obligations and requirements for high-risk AI systems and can be subject to investigations and penalties for non-compliance. However, this applicable date only applies to high-risk AI systems listed in Annex III (e.g., education, employment, administration of justice) that are placed on the market or put into service from 2 August 2026 onwards.
For such Annex III high-risk AI systems that were placed on the market or put into service before 2 August 2026, providers and deployers are only subject to AI Act obligations and requirements if, from that date onwards, there is a significant change in design or intended purpose of the AI system. Furthermore, for high-risk AI systems that are products, or safety components of products, regulated by specific EU product safety laws listed in Annex I, the applicable date is 2 August 2027.
What changes are being proposed?
If you are asked "when do the compliance obligations for high-risk AI systems apply?", your answer now has to be "it depends".
Given the nature of the proposed amendments, there are various potential scenarios. The Commission is seeking to link the applicability of the high-risk AI system provisions to the availability of technical standards and associated support tools. However, if these artefacts are not approved and available within a certain timeframe, a backstop date applies, which represents the latest possible applicable date (the short sketch after the scenarios below illustrates this logic).
Under this proposal there are, broadly speaking, three potential scenarios for when most of the provisions relating to high-risk AI systems may apply (covering high-risk AI system classification, development requirements, and obligations of providers, deployers, and other parties):
Scenario 1. If technical standards and associated support tools for high-risk AI system compliance are finalised and approved by the Commission, then the applicable compliance date will be six months after this approval (for high-risk AI systems listed in Annex III) and 12 months after this approval (for high-risk AI systems that are products, or safety components of products, regulated by an EU law listed in Annex I).
Scenario 2. However, if technical standards and associated support tools for high-risk AI system compliance are not finalised or approved by the Commission in time (i.e., before the dates below), then the applicable compliance dates will be 2 December 2027 (for high-risk AI systems listed in Annex III) and 2 August 2028 (for high-risk AI systems that are products, or safety components of products, regulated by an EU law listed in Annex I).
Scenario 3. Given that the applicable date in law today is 2 August 2026, if these amendments are not approved and enacted before this date, then the provisions relating to high-risk AI systems will, technically speaking, apply from then. This creates timeline pressure to get these changes approved within the next few months.
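To make this date logic concrete, here is a minimal Python sketch of the scenarios as described above. It is illustrative only: the function name and the month-adding helper are my own, and it assumes the amendments are enacted before 2 August 2026 (i.e., Scenario 3 does not occur).

```python
from datetime import date

# Backstop dates under the proposal (the latest possible applicable dates).
BACKSTOP_ANNEX_III = date(2027, 12, 2)
BACKSTOP_ANNEX_I = date(2028, 8, 2)

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 to stay valid)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

def applicable_date(standards_approved_on: date | None, annex: str) -> date:
    """Proposed compliance date for high-risk AI systems.

    standards_approved_on: date the Commission approves the technical
    standards and support tools, or None if approval has not happened.
    annex: "III" for Annex III systems, "I" for Annex I product systems.
    """
    months = 6 if annex == "III" else 12
    backstop = BACKSTOP_ANNEX_III if annex == "III" else BACKSTOP_ANNEX_I
    if standards_approved_on is None:
        return backstop                                               # Scenario 2
    return min(add_months(standards_approved_on, months), backstop)   # Scenario 1

print(applicable_date(date(2027, 3, 1), "III"))  # 2027-09-01 (approval + 6 months)
print(applicable_date(date(2027, 7, 1), "III"))  # 2027-12-02 (backstop kicks in)
print(applicable_date(None, "I"))                # 2028-08-02 (backstop)
```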
What impact would these changes have?
To clarify, 2 December 2027 and 2 August 2028 are the backstop dates for high-risk AI system compliance. To reinforce this point, the Commission has explained that the grace period will be up to sixteen months (referring to the time between 2 August 2026 and 2 December 2027). This means that if technical standards come too late (i.e., after 2 June 2027, which is six months before 2 December 2027) these backstop dates will apply.
These amendments shine the spotlight on the ongoing work being led by CEN/CENELEC to agree and publish technical standards. They also highlight the importance that the Commission places on these artefacts to support organisations and facilitate compliance.
However, even if the applicable date may be delayed, giving providers and deployers more time to prepare, this extra time comes at the expense of certainty: organisations now do not know what the applicable date will be.
2. Timeline changes for transparency-requiring AI system compliance
What is in law today?
Article 50 of the AI Act outlines transparency obligations for providers and deployers of certain AI systems. Article 50 covers obligations relating to disclosure, informing end users about the use of AI, labelling certain deep fake content, and detectability of AI system outputs. Specifically, Article 50(2) stipulates that:
"providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video, or text content shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated".
Currently, this specific obligation applies from 2 August 2026. This compliance date applies to all AI systems, irrespective of whether they are placed on the market or put into service before or after 2 August 2026.
What changes are being proposed?
The Commission proposes to push back the applicable date for this specific transparency obligation to 2 February 2027 for providers of AI systems that have been placed on the market before 2 August 2026. This proposed six-month delay only applies to the obligation stipulated in Article 50(2) (on machine readability and detectability of AI system outputs), and not to the other transparency obligations outlined in Article 50.
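For illustration, the proposed rule reduces to a small decision, sketched below in Python. The function name and parameter are hypothetical; the dates come from the proposal as described above.

```python
from datetime import date

CURRENT_DATE = date(2026, 8, 2)   # Article 50(2) applicable date in law today
DELAYED_DATE = date(2027, 2, 2)   # proposed date for systems already on the market

def article_50_2_compliance_date(placed_on_market: date) -> date:
    """Proposed Article 50(2) date for a synthetic-content-generating AI system."""
    if placed_on_market < CURRENT_DATE:
        return DELAYED_DATE        # legacy systems get the six-month delay
    return placed_on_market        # new systems must comply from their release date
```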
What impact would these changes have?
This change would give providers of AI systems that generate synthetic audio, image, video, or text content (i.e., most generative AI systems), and that have been, or will be, placed on the market or put into service before 2 August 2026, an additional six months to ensure that these AI systems are developed in such a way that the outputs they generate are detectable as AI-generated.
Although this is a relatively short delay, it is nonetheless an acknowledgement by the Commission of the technical and engineering challenges providers face in developing or modifying their AI systems to adhere to this obligation. However, any AI systems placed on the market on or after 2 August 2026 will have to comply with this obligation from their release date.
3. Limiting registration in the EU public database for high-risk AI systems
What is in law today?
Annex III of the AI Act lists eight categories of high-risk AI system, including law enforcement (#6), education and vocational training (#3), and employment, workers' management and access to self-employment (#5). However, there are classification rules which mean that an AI system intended for use, or used, in one of these domains is not necessarily a high-risk AI system.
AI systems listed in Annex III are not considered high-risk if it is demonstrated that they do not pose significant risk of harm to health, safety, or fundamental rights. For example, if the AI system does not materially influence decisions or is only used for a narrow procedural task, the provider is entitled to demonstrate, based on a documented assessment, that it is not a high-risk AI system. This derogation, including the conditions to fulfil it, is outlined in Article 6(3) and only applies to AI systems listed in Annex III.
Providers must register high-risk AI systems listed in Annex III in the EU public database for high-risk AI systems, before those AI systems are placed on the market or put into service. Interestingly, this registration obligation also includes AI systems that the provider has concluded are not high-risk via the derogation procedure outlined in Article 6(3).
What changes are being proposed?
The Commission proposes to limit the scope of this registration obligation so that it no longer applies to AI systems that providers have concluded are not high-risk via the Article 6(3) derogation procedure. Simply put, where a provider has assessed and documented that an AI system used in an Annex III domain is not high-risk, the provider will not have to register that AI system in the EU public database for high-risk AI systems.
However, although providers can make this assessment independently and do not require any external approval (e.g., from the AI Office or market surveillance authority), providers will still be obliged to share the documentation of the assessment, containing the justification and supporting evidence, upon request from a regulator.
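Expressed as pseudo-logic, the proposed registration rule looks something like the Python sketch below. The function and its boolean parameters are hypothetical simplifications of the mechanics described above.

```python
def must_register_in_eu_database(annex_iii_domain: bool,
                                 assessed_not_high_risk: bool,
                                 assessment_documented: bool) -> bool:
    """Sketch: would an AI system need registering in the EU public
    database for high-risk AI systems under the proposed amendment?"""
    if not annex_iii_domain:
        return False  # this registration obligation targets Annex III systems
    if assessed_not_high_risk and assessment_documented:
        # Article 6(3) derogation applies: no registration required, but the
        # documented assessment must be shared with regulators on request.
        return False
    return True
```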
What impact would this have?
This may seem like a subtle change at first glance, but it would be consequential for organisations using AI at scale.
Most enterprise AI governance practitioners likely raised their eyebrows when they realised that every AI system used in a high-risk domain, including AI systems that are not high-risk due to their use for merely assistive, procedural, or preparatory tasks, would have to be registered. This is difficult (perhaps near impossible) to keep track of and implement, due to the increasingly ubiquitous use of AI to support and augment workflows across virtually all domains of enterprise activity. Indeed, the Commission describes the current registration obligation as a "disproportionate compliance burden".
Therefore, the most obvious impacts of this change would likely be far fewer AI systems registered in the EU public database for high-risk AI systems and reduced administrative overheads for organisations developing and deploying AI systems.
However, it would also reduce public transparency regarding which AI systems providers deem not to be high-risk and how they have made such determinations. This could incentivise some providers to take a more expansive approach to interpreting Article 6(3) and determining what is not a high-risk AI system, as they may reasonably judge that the risk of doing so (and being penalised for getting it wrong) is lower with significantly less public scrutiny.
4. Softening of the AI literacy obligation
What is in law today?
Under Article 4 of the AI Act, providers and deployers of AI systems are obliged to implement "AI literacy". This is one of the most important aspects of the law, because it has prompted many organisations in the EU and further afield to roll out AI training and upskilling initiatives for their workforce. Specifically, Article 4 requires organisations to ensure that "staff and other persons dealing with the operation and use of AI systems" have a "sufficient level of AI literacy".
In practice, given that all staff in modern organisations can use AI systems (e.g., ChatGPT or Gemini), a reasonable interpretation of Article 4 is that all such staff should receive some form of AI-focused training. The AI literacy obligation has been applicable since February 2025. However, there are no enforcement penalties for non-compliance, although non-compliance could be taken into account during enforcement investigations or proceedings relating to other breaches of the Act.
What changes are being proposed?
The Commission proposes to remove the obligation for providers and deployers to implement AI literacy. Rather than providers and deployers being legally required to ensure that staff operating and using AI systems have a sufficient level of AI literacy, the Commission and Member States would be required to foster AI literacy and "encourage providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy". The Commission has suggested that the ambiguity of the current "unspecified obligation" has caused issues for businesses, especially smaller firms.
What impact would this have?
This change would be significant because it would remove the broad and expansive legal obligation for companies to implement AI literacy. However, human oversight of high-risk AI systems must still be assigned to staff with sufficient training and competence. Therefore, ensuring AI literacy is still required in that context.
Moreover, it will be practically impossible for any organisation to comply with the AI Act (or to manage AI risks, implement AI at scale, and maximise the value of AI) without educational and training initiatives focused on AI. Therefore, forward-thinking enterprises are unlikely to abandon their AI literacy programmes just because they are no longer a legal requirement. However, certain initiatives may be scaled back, deprioritised, or changed in focus or scope.
5. Expanding the scope of the European AI Office's regulatory powers
What is in law today?
Here is a simplified summary of the (rather complex) AI Act governance regime:
There are governance and regulatory bodies at both the EU and member state level.
At the EU level, the most important bodies are the European AI Office (which is part of the Commission) and the European AI Board.
At the member state level, the most important bodies are the market surveillance authorities. They are responsible for monitoring, investigations, and enforcement of the AI Act. There will potentially be several in each EU member state.
The AI Office is responsible for overseeing and enforcing the provisions on general-purpose AI models, whereas the market surveillance authorities are responsible for overseeing and enforcing the provisions on AI systems (e.g., high-risk and transparency-requiring AI systems), as well as most other AI Act provisions.
What changes are being proposed?
The Commission proposes to "centralise oversight over a large number of AI systems built on general-purpose AI models" when the same provider develops both the general-purpose AI model and the AI system.
The proposed amendments to Article 75 would make the AI Office the body responsible for monitoring and supervising compliance of AI systems that leverage general-purpose AI models. However, this would only apply when the general-purpose AI model and the AI system are developed and placed on the market or put into service by the same provider. In such scenarios, the AI Office would be "exclusively competent", which means that the market surveillance authorities in the respective EU member states would no longer have a supervisory role. The AI Office would also have "all the powers of a market surveillance authority".
This expansion in scope of the AI Office's responsibilities does not apply to high-risk AI systems covered by an Annex I EU product safety law. Therefore, this change primarily impacts high-risk AI systems listed in Annex III and transparency-requiring AI systems regulated by Article 50 (where such AI systems leverage general-purpose AI models).
Finally, under this proposal, the AI Office would also have exclusive competence as the regulator of "AI systems that constitute or that are integrated into a designated very large online platform or very large online search engine" (as defined and regulated by the Digital Services Act).
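The resulting allocation of oversight can be summarised as a simple routing rule, sketched below in Python. The function and its flags are illustrative shorthand for the carve-outs described above, not terms from the proposal.

```python
def supervising_authority(same_provider_for_model_and_system: bool,
                          annex_i_product_safety_law: bool,
                          part_of_vlop_or_vlose: bool) -> str:
    """Sketch of who would supervise an AI system under amended Article 75."""
    if annex_i_product_safety_law:
        # Carve-out: Annex I product systems stay with national authorities.
        return "market surveillance authority"
    if part_of_vlop_or_vlose or same_provider_for_model_and_system:
        return "European AI Office"  # exclusive competence under the proposal
    return "market surveillance authority"
```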
What impact would this have?
The implied rationale behind this change is that the Commission does not think it makes sense for the AI Office to be responsible for overseeing providers of general-purpose AI models but not the AI systems developed, made available, and put into service by those same providers.
These changes would make the AI Office the supervisory authority for many of the most widely used AI systems worldwide. This is because most mainstream generative AI platforms, such as ChatGPT, Gemini, Claude, Grok, and Microsoft Copilot, are AI systems built on general-purpose AI models, with the same organisation being the provider of both the AI model and the AI system. Therefore, this would represent a meaningful increase in the relevance and prominence of the AI Office for AI Act oversight and enforcement, and it would also enable the AI Office to pursue investigations and enforcement action relevant for both general-purpose AI model and AI system compliance in a coordinated manner.
This would also mean that certain AI system providers would not be subject to regulatory investigations and enforcement action across multiple EU member states, which is possible if the law isn't amended.
6. Proportionality for small mid-cap (SMC) enterprises
What is in law today?
The AI Act provides an element of flexibility and proportionality for micro, small, and medium-sized enterprises (SMEs), including start-ups. For example, the compliance penalties which SMEs can face are capped in the following way:
35 million EUR or 7% of total worldwide annual turnover (whichever is lower), for prohibited AI practices.
15 million EUR or 3% of total worldwide annual turnover (whichever is lower), for most other violations.
7.5 million EUR or 1% of total worldwide annual turnover (whichever is lower), for supplying incorrect, incomplete, or misleading information.
This contrasts with the "whichever is higher" penalty logic that applies to all other businesses (i.e., those which are not SMEs). In practice, this means that many non-SME businesses could face potential penalties running into the billions of euros, whereas penalties for SMEs will always be capped as per the above.
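As a rough illustration, here is a minimal Python sketch of the two penalty regimes, assuming a single tier and ignoring the many factors regulators weigh when setting actual fines. The function name and example figures are mine.

```python
def max_penalty_eur(fixed_cap: float, pct_cap: float,
                    turnover: float, capped_regime: bool) -> float:
    """Maximum fine for one penalty tier, in EUR.

    capped_regime=True applies the SME (and, under the proposal, SMC)
    "whichever is lower" logic; False applies "whichever is higher".
    """
    pct_amount = pct_cap * turnover
    return min(fixed_cap, pct_amount) if capped_regime else max(fixed_cap, pct_amount)

# Top tier (35 million EUR / 7%) for a company with 50 billion EUR turnover:
print(max_penalty_eur(35e6, 0.07, 50e9, capped_regime=False))  # 3,500,000,000.0
print(max_penalty_eur(35e6, 0.07, 50e9, capped_regime=True))   # 35,000,000.0
```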
Other ways in which the AI Act seeks to ease compliance burdens for SMEs include allowing SME providers of high-risk AI systems to provide the required technical documentation in a simplified way and providing SMEs with free access to AI regulatory sandboxes.
What changes are being proposed?
The first proposed change is to add legal definitions of SME and small mid-cap enterprise (SMC) to the AI Act. These are:
SME: an enterprise which employs fewer than 250 people and which has an annual turnover not exceeding 50 million EUR, and/or an annual balance sheet total not exceeding 43 million EUR.
SMC: an enterprise which employs fewer than 750 people and which has an annual turnover not exceeding 150 million EUR or an annual balance sheet total not exceeding 129 million EUR.
The second and more significant proposed change is to extend the flexibility and proportionality afforded to SMEs to SMCs as well. This means that SMCs would benefit from the same capped enforcement penalty regime as SMEs, significantly reducing their total potential penalty exposure in certain circumstances. SMCs that are providers of high-risk AI systems would also be able to provide the required technical documentation in a simplified manner.
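For illustration, the proposed thresholds can be expressed as a simple classifier, sketched below in Python. This is a simplification: the real definitions also involve further conditions (e.g., on linked enterprises under the EU SME definition), which the sketch ignores.

```python
def classify_enterprise(headcount: int, turnover_eur: float,
                        balance_sheet_eur: float) -> str:
    """Classify an enterprise under the proposed SME/SMC definitions."""
    if headcount < 250 and (turnover_eur <= 50e6 or balance_sheet_eur <= 43e6):
        return "SME"    # capped penalties already apply
    if headcount < 750 and (turnover_eur <= 150e6 or balance_sheet_eur <= 129e6):
        return "SMC"    # capped penalties would apply under the proposal
    return "other"      # "whichever is higher" penalty logic applies

print(classify_enterprise(400, 120e6, 100e6))  # SMC
```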
What impact would this have?
Core threads from the Draghi report are woven throughout this proposal. The Commission will be hoping that easing regulatory compliance burdens and softening the enforcement environment for a larger pool of companies will make it easier for EU digital start-ups and scale-ups to grow, innovate, and compete internationally. Indeed, the Draghi report argued that "regulatory burdens" are particularly damaging for digital-sector SMEs trying to rapidly scale up.