The EU AI Act entered into force on August 1, 2024. The prohibited practices provisions have applied since February 2, 2025. The high-risk obligations start applying on August 2, 2027. That is 15 months from the date of this article. The clock is not about to start. It is already running.
Most organizations I talk to are in one of three states. They have read summaries of the regulation and decided it probably applies to them but haven't started compliance work. They have started a compliance project focused on documentation and risk classification but haven't changed their technical infrastructure. Or they haven't read the regulation at all and are hoping it will be clarified before it matters.
None of these positions is comfortable. Here is a quarter-by-quarter timeline of what needs to happen between now and August 2027, specifically for organizations deploying AI agents.
What has already happened
Two important deadlines have already passed.
February 2, 2025: The prohibition on certain AI practices took effect. Article 5 prohibits AI systems that deploy subliminal techniques, exploit vulnerabilities of specific groups, perform social scoring, or use real-time remote biometric identification in public spaces (with limited exceptions for law enforcement). If any of your AI agents engage in these practices, you are already in violation.
February 2, 2025: The AI literacy obligation (Article 4) also took effect. Organizations deploying AI systems must ensure that staff dealing with AI have "a sufficient level of AI literacy." This is a broad requirement. It applies to everyone from the board to the operations team. The interpretation is still forming, but the obligation is active.
If you haven't addressed these two requirements, they are already overdue.
Q2 2026: Inventory and classify
Priority: Know what you have
- Complete an inventory of all AI systems and agents in operation
- Classify each system against Annex III (high-risk categories)
- Identify which agents process personal data (GDPR intersection)
- Identify which agents operate in regulated sectors (DORA, MiFID II, Solvency II)
- Map third-party dependencies (model providers, tool APIs, data sources)
- Assign an owner to each AI system for compliance purposes
The inventory is the foundation. Every subsequent step depends on knowing what you have. The most common failure mode is not that organizations refuse to inventory their AI systems. It is that they don't know about all of them. Shadow AI is not a hypothetical risk. It is the normal state of affairs in organizations where AI tools are accessible and useful.
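To make the inventory concrete, here is a minimal sketch of what a single inventory record might capture, covering the items in the list above. The field names and the example entry are illustrative, not drawn from the regulation or any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory. All field names are illustrative."""
    system_id: str                        # internal identifier
    name: str
    owner: str                            # accountable person for compliance
    description: str
    annex_iii_category: str | None        # e.g. "employment"; None if no match
    processes_personal_data: bool         # GDPR intersection
    regulated_sector: str | None          # e.g. "DORA", "MiFID II", "Solvency II"
    third_party_dependencies: list[str] = field(default_factory=list)

# Hypothetical entry for an HR screening agent
inventory = [
    AISystemRecord(
        system_id="agent-007",
        name="CV screening agent",
        owner="head-of-people@example.com",
        description="Ranks incoming applications for recruiter review",
        annex_iii_category="employment",  # Annex III covers employment uses
        processes_personal_data=True,
        regulated_sector=None,
        third_party_dependencies=["external LLM API", "ATS database"],
    ),
]
```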
Classification against Annex III is the critical decision. High-risk classification triggers the full weight of Chapter III, Section 2: risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Not-high-risk classification means lighter obligations, primarily transparency and AI literacy.
Many AI agents in financial services, HR, education, and critical infrastructure will likely qualify as high-risk. The classification should be conservative. The cost of incorrectly classifying a system as not-high-risk and later being found non-compliant is significantly greater than the cost of applying high-risk controls to a system that might not strictly require them.
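That conservative posture can be encoded directly in the classification logic. A minimal, standalone sketch: anything that matches an Annex III category, or that nobody has formally reviewed yet, is treated as high-risk. The logic is illustrative, not legal advice.

```python
def is_high_risk(annex_iii_category: str | None, reviewed: bool = False) -> bool:
    """Conservative default: treat a system as high-risk unless a documented
    review has concluded otherwise. Illustrative logic, not legal advice."""
    if annex_iii_category is not None:
        return True   # matches an Annex III category
    if not reviewed:
        return True   # no documented review yet: assume high-risk
    return False

assert is_high_risk("employment") is True          # Annex III match
assert is_high_risk(None) is True                  # unreviewed: conservative default
assert is_high_risk(None, reviewed=True) is False  # documented review says otherwise
```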
Q3 2026: Risk management and technical infrastructure
Priority: Build the compliance infrastructure
- Establish the risk management system required by Article 9
- Deploy monitoring and logging infrastructure for high-risk AI agents
- Implement data governance measures (Article 10): data quality, bias testing, representativeness
- Design and test human oversight mechanisms (Article 14); one possible approval-gate pattern is sketched at the end of this section
- Begin technical documentation (Article 11) for each high-risk system
- Assess model provider contracts for Article 25 obligations
This is the most engineering-intensive quarter. The risk management system (Article 9) requires continuous, iterative identification and analysis of risks, estimation and evaluation of those risks, and adoption of risk management measures. For AI agents, risk identification must cover prompt injection, data leakage, hallucination, scope violations, and adversarial manipulation. The risk management system must be documented and maintained throughout the AI system's lifecycle.
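One way to make "continuous, iterative" operational is a living risk register that is reviewed on a schedule rather than written once. A minimal sketch follows; the risk categories come from the list above, and everything else (field names, rating scale) is illustrative.

```python
from dataclasses import dataclass
from datetime import date

# Agent-specific risk categories discussed above; the enumeration is illustrative.
AGENT_RISKS = (
    "prompt_injection",
    "data_leakage",
    "hallucination",
    "scope_violation",
    "adversarial_manipulation",
)

@dataclass
class RiskEntry:
    system_id: str
    risk: str            # one of AGENT_RISKS
    likelihood: str      # illustrative scale: "low" / "medium" / "high"
    impact: str
    mitigation: str      # the adopted risk management measure
    last_reviewed: date  # supports the continuous, iterative requirement

register = [
    RiskEntry(
        system_id="agent-007",
        risk="prompt_injection",
        likelihood="medium",
        impact="high",
        mitigation="Input sanitisation plus a strict tool allow-list",
        last_reviewed=date(2026, 7, 1),
    ),
]
```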
The monitoring infrastructure is not optional for high-risk systems. Article 9(2)(c) requires that risk evaluation draw on data gathered from the post-market monitoring system referred to in Article 72, and Article 72 sets out specific post-market monitoring obligations of its own. For AI agents, this means runtime monitoring of every agent classified as high-risk: what it does, what data it accesses, what decisions it makes, and how its behavior changes over time.
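At the logging level, "what it does, what data it accesses, what decisions it makes" translates into a structured event emitted on every agent action. A minimal sketch, with a schema of my own choosing; durable, append-only storage is implied but not shown.

```python
import json
from datetime import datetime, timezone

def log_agent_event(system_id: str, action: str, data_accessed: list[str],
                    decision: str, model_version: str) -> str:
    """Emit one structured, timestamped record per agent action.
    The schema is illustrative; the point is that every action, data
    access, and decision leaves a durable trace."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "action": action,
        "data_accessed": data_accessed,
        "decision": decision,
        "model_version": model_version,  # supports tracking behavior change over time
    }
    line = json.dumps(event)
    print(line)  # stand-in for append-only, tamper-evident storage
    return line

log_agent_event("agent-007", "rank_applications",
                ["ats:applications:2026-07"], "shortlisted 12 of 340",
                "provider-model-v4.1")
```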
If you don't have this infrastructure by Q3 2026, you are building it under time pressure in the quarters that follow. That is a position you don't want to be in.
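On human oversight (Article 14, flagged in the list above): one common pattern for agents is an approval gate, where the agent proposes and a human disposes. The sketch below is one illustrative way to structure it; the notion of a "consequential" action and the review queue are assumptions, not the regulation's terms.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    system_id: str
    description: str
    consequential: bool  # hypothetical flag: affects a person's rights or access

review_queue: list[ProposedAction] = []

def execute_or_escalate(action: ProposedAction) -> str:
    """Route consequential actions to a human reviewer instead of letting
    the agent execute them autonomously. Illustrative pattern only."""
    if action.consequential:
        review_queue.append(action)  # a human must approve before execution
        return "escalated"
    return "executed"

status = execute_or_escalate(ProposedAction(
    system_id="agent-007",
    description="Reject applicant 4411",
    consequential=True,
))
assert status == "escalated" and len(review_queue) == 1
```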
Q4 2026: Testing and documentation
Priority: Validate and document
- Conduct conformity assessments for high-risk systems (Article 43)
- Complete technical documentation to Annex IV specifications
- Test human oversight mechanisms under realistic conditions
- Validate that logging meets Article 12 requirements (automatic recording of events); a minimal validation sketch follows this list
- Run accuracy and robustness testing (Article 15)
- Establish the post-market monitoring plan
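For the Article 12 validation item above, a simple automated check can confirm that every emitted event carries the fields your logging design promises. The required-field set below matches the illustrative event format from the Q3 sketch, not any list in the regulation.

```python
import json

# Fields our illustrative logging design requires on every event
# (matching the Q3 sketch, not any list in the regulation).
REQUIRED_FIELDS = {"timestamp", "system_id", "action",
                   "data_accessed", "decision", "model_version"}

def validate_log_line(line: str) -> list[str]:
    """Return the names of required fields missing from one log line."""
    event = json.loads(line)
    return sorted(REQUIRED_FIELDS - event.keys())

sample = '{"timestamp": "2026-10-01T09:00:00+00:00", "system_id": "agent-007"}'
assert validate_log_line(sample) == ["action", "data_accessed",
                                     "decision", "model_version"]
```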
Technical documentation under Annex IV is extensive. It must include a general description of the AI system, a detailed description of the elements and development process, information about monitoring and functioning, a description of the risk management system, a description of changes throughout the lifecycle, the list of harmonised standards applied, the EU declaration of conformity, and a detailed description of the system for evaluating performance in the post-market phase.
For AI agents, several of these requirements are harder than for traditional AI systems. The "detailed description of the development process" for a system that uses a third-party language model is complicated by the fact that you did not develop the model. The "description of changes throughout the lifecycle" is complicated by the fact that the model provider updates the model without your involvement. These are not unsolvable problems, but they require careful documentation of what you know, what you don't know, and how you manage the uncertainty.
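One practical way to manage that uncertainty is to record the provider-reported model identity on every call and treat any observed change as a lifecycle event in its own right. A minimal sketch; the version strings and changelog format are invented for illustration.

```python
from datetime import date

# Lifecycle changelog for a system built on a third-party model. We cannot
# document the provider's development process, but we can document every
# observed change in the model we actually call.
changelog: list[dict] = []
last_seen_version: str | None = None

def record_model_version(observed: str, when: date) -> None:
    """Append a lifecycle entry whenever the upstream model identity changes."""
    global last_seen_version
    if observed != last_seen_version:
        changelog.append({
            "date": when.isoformat(),
            "from": last_seen_version,
            "to": observed,
            "note": "Upstream model change observed; impact assessment pending",
        })
        last_seen_version = observed

record_model_version("provider-model-v4.1", date(2026, 10, 5))
record_model_version("provider-model-v4.2", date(2026, 11, 18))
assert len(changelog) == 2  # the initial observation plus one upstream change
```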
Q1 2027: Dry run
Priority: Operate as if enforcement has started
- Run all high-risk AI systems under full compliance controls
- Practice incident reporting workflows
- Conduct internal audits of documentation completeness
- Test the serious incident reporting process (Article 73)
- Verify that transparency obligations are met (Article 50): users know they're interacting with AI
- Address gaps identified during conformity assessment
The dry run quarter is about finding problems while the cost of fixing them is low. Every compliance program discovers gaps during implementation. The question is whether you discover them six months before enforcement (when they are fixable) or six months after enforcement (when they are violations).
Article 73 requires reporting of "serious incidents" to market surveillance authorities. The definition of a serious incident is broad: any incident or malfunction that directly or indirectly leads to the death of a person or serious harm to a person's health, serious and irreversible disruption of the management or operation of critical infrastructure, infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment. For AI agents, an incident where an agent makes a consequential decision based on hallucinated data, or where an agent accesses data outside its authorized scope, could qualify depending on the impact.
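A triage checklist makes that judgment repeatable under pressure. The sketch below encodes the serious-incident criteria as yes/no questions; the mapping of a specific agent failure onto those criteria is my illustrative reading, not authoritative guidance.

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    """Yes/no answers to the serious-incident criteria described above."""
    death_or_serious_health_harm: bool
    irreversible_critical_infra_disruption: bool
    fundamental_rights_infringement: bool
    serious_property_or_environment_damage: bool

def is_serious_incident(a: IncidentAssessment) -> bool:
    """True if any criterion is met, which starts the reporting obligation."""
    return any((
        a.death_or_serious_health_harm,
        a.irreversible_critical_infra_disruption,
        a.fundamental_rights_infringement,
        a.serious_property_or_environment_damage,
    ))

# Hypothetical: an agent rejected an applicant based on hallucinated data.
assessment = IncidentAssessment(
    death_or_serious_health_harm=False,
    irreversible_critical_infra_disruption=False,
    fundamental_rights_infringement=True,  # whether this holds depends on the facts
    serious_property_or_environment_damage=False,
)
assert is_serious_incident(assessment)
```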
Q2 2027: Final preparation
Priority: Close remaining gaps
- Complete any outstanding remediation from Q1 audits
- Prepare the EU declaration of conformity (Article 47)
- Ensure registration in the EU database (Articles 49 and 71) where required
- Brief the board on compliance status and residual risks
- Verify that all deployer obligations (Article 26) are met
- Confirm contractual arrangements with model providers address Article 25
What this timeline assumes
This timeline assumes you start now. It assumes you have some AI governance infrastructure already in place or can deploy it quickly. It assumes you have a team that can dedicate meaningful time to compliance work. And it assumes the regulatory environment doesn't change significantly between now and August 2027.
None of these assumptions is guaranteed. The European Commission is still developing implementing acts and harmonised standards. The European AI Office is still forming its supervisory approach. National market surveillance authorities are still building capacity. There will be clarifications, interpretations, and potentially delays.
But the regulation is in force. The text is final. The obligations for high-risk AI systems are specific and detailed. Waiting for perfect clarity before starting work is a strategy that leads to non-compliance. Starting now, with the understanding that some details may evolve, is the prudent path.
The organizations that will be compliant on August 2, 2027, are the ones that start in Q2 2026. Not Q4 2026. Not Q1 2027. Now.
Fifteen months is less time than it sounds. An AI governance program requires inventory, classification, risk assessment, technical infrastructure, documentation, testing, and organizational change. Each of these takes longer than expected. Starting now is not early. It is necessary.
Start your EU AI Act compliance
TapPass provides the runtime monitoring, audit logging, and policy enforcement that Articles 9, 12, and 14 require. See how it maps to your compliance timeline.
Book a demo