The Digital Operational Resilience Act has been in force since January 2025. Most financial institutions have focused their DORA compliance on traditional ICT systems: core banking, payment processing, trading platforms. AI agents are not on the radar yet. They need to be.
DORA doesn't mention artificial intelligence by name. It doesn't need to. The regulation applies to "ICT systems" and "ICT services" broadly. An AI agent that processes customer data, makes decisions about financial products, or interacts with market infrastructure is an ICT system under DORA. The obligations apply regardless of whether the system uses machine learning, rule-based logic, or a large language model.
This article walks through the specific DORA requirements that apply to AI agents in financial services, and what they mean in practice.
ICT risk management (Chapter II)
DORA's Chapter II requires financial entities to have a comprehensive ICT risk management framework. Article 6 mandates that the framework cover "all ICT assets" and that entities "identify all sources of ICT risk."
For AI agents, this means:
Every AI agent must be included in the entity's ICT asset inventory. The risk assessment must cover the agent's access to data, its decision-making scope, its dependencies (model providers, tool APIs), and the potential impact of its failure or compromise.
Most financial institutions have not inventoried their AI agents. Some don't know how many they have. The agent built by the credit risk team to automate preliminary assessments. The one the compliance team uses to draft regulatory reports. The customer service bot that handles first-line queries. Each of these is an ICT asset under DORA and requires risk assessment.
Article 8 requires a "protection and prevention" framework including access controls. For AI agents, this means defining what each agent can access, enforcing those limits, and monitoring for violations. Broad API keys shared across agents do not satisfy this requirement.
ICT incident management (Chapter III)
Article 17 requires entities to establish processes for detecting, managing, and reporting ICT-related incidents. Article 19 mandates reporting of "major ICT-related incidents" to competent authorities.
For AI agents, the question is: what constitutes an incident?
A prompt injection that causes an agent to exfiltrate customer data is clearly an incident. But what about an agent that hallucinated a credit score? An agent that made a recommendation based on stale data? An agent that exceeded its authorized scope by querying a database it shouldn't have accessed?
These are judgment calls, and the regulation doesn't provide AI-specific guidance. What it does require is the ability to detect these events in the first place. You can't report an incident you didn't see.
Financial entities need monitoring capable of detecting anomalous AI agent behavior: scope violations, data access anomalies, unexpected tool calls, budget overruns, and pattern deviations. Detection must be timely enough to support incident classification and reporting within DORA's timelines.
DORA's reporting timelines are tight. Initial notification within 4 hours of classification. Intermediate report within 72 hours. Final report within one month. If your AI agent causes an incident at 3 AM on a Friday, you need detection systems that flag it before Monday morning.
Digital operational resilience testing (Chapter IV)
Article 24 requires regular testing of ICT systems. For entities that meet certain thresholds, Article 26 mandates threat-led penetration testing (TLPT).
For AI agents, operational resilience testing should include:
- Prompt injection testing. Can external inputs manipulate the agent into unauthorized actions? This is the AI equivalent of input validation testing.
- Failure mode testing. What happens when the model provider is unavailable? When the agent receives malformed data? When a tool API returns an error? Does the agent fail safely or does it proceed with incomplete information?
- Budget exhaustion testing. What happens when the agent's token budget is depleted mid-session? Does it degrade gracefully or does it crash?
- Scope boundary testing. Can the agent be induced to access data outside its authorized scope? To call tools it shouldn't have access to? To operate outside its time window?
Most organizations are not testing their AI agents for operational resilience. The testing frameworks for this don't exist yet in most enterprises. But DORA doesn't exempt systems because they're new. The testing obligation applies to all ICT systems that support critical or important functions.
Third-party risk management (Chapter V)
This is where it gets particularly interesting for AI agents. Chapter V requires financial entities to manage risks arising from their use of ICT third-party service providers. The obligations include contractual requirements, risk assessment, and monitoring.
For AI agents, the most significant third-party relationship is with the model provider. OpenAI, Anthropic, Google, Mistral. These are ICT service providers under DORA's definitions. The financial entity is using their service to process data and make decisions that affect customers.
The contractual arrangement with the model provider must address data security, audit rights, business continuity, and subcontracting. The entity must assess the provider's concentration risk and have exit strategies. Ongoing monitoring of the provider's performance and risk profile is required.
Article 28 requires that contracts with ICT third-party providers include provisions for data processing locations, audit access, and notification of security incidents. Most model provider agreements do not currently meet these requirements. The standard OpenAI or Anthropic terms of service are not designed for DORA compliance.
This creates a practical problem. Financial institutions using AI agents powered by external models need either to negotiate DORA-compliant contracts with model providers (which most providers are not yet structured to offer) or to use self-hosted models where the third-party risk is reduced to the infrastructure provider.
Article 29 addresses concentration risk. If multiple AI agents across the institution all depend on the same model provider, the failure or compromise of that provider affects all of them. DORA requires entities to assess and mitigate this concentration risk. In practice, this may require multi-provider strategies for critical AI systems.
Information sharing (Chapter VI)
Article 45 permits financial entities to share information about cyber threats. For AI agents, this includes sharing intelligence about prompt injection techniques, model vulnerabilities, and agent-specific attack patterns.
This is an area where the financial services sector could benefit significantly from collective intelligence. Prompt injection attacks that work against one agent are likely to work against similar agents at other institutions. Sharing attack patterns (without sharing proprietary data) could meaningfully improve the sector's resilience.
What this means practically
DORA's requirements for AI agents are not fundamentally different from its requirements for other ICT systems. The challenge is that AI agents have characteristics that make the standard compliance approach harder:
They're non-deterministic. The same input can produce different outputs. Testing must account for this variability. A test that passes once is not evidence that the system will behave the same way next time.
They're autonomousm. Traditional ICT systems execute predefined logic. AI agents decide what to do at runtime. Monitoring needs to be continuous and intelligent, not just periodic health checks.
Their supply chain is opaque. Model providers don't disclose training data, model weights, or internal safety measures in sufficient detail for a comprehensive risk assessment. Financial entities are dependent on systems they cannot fully inspect.
They evolve without notice. When a model provider updates their model, the behavior of every agent using that model changes. There's no change management process. No regression testing. The agent you tested last month is running on a different model this month.
These characteristics don't exempt AI agents from DORA. They make DORA compliance harder and more urgent. The operational risk from an ungoverned AI agent is potentially greater than from a traditional ICT system precisely because of the non-determinism and autonomy.
Financial institutions that are deploying AI agents have a narrow window to bring them under DORA governance. The regulation is already in force. The supervisory expectations are forming. The institutions that treat AI agents as ICT assets now, subject to the same inventory, risk assessment, monitoring, and testing requirements as every other critical system, will be ahead. The ones that treat AI as a special category that exists outside the operational resilience framework will eventually discover that regulators disagree.
DORA-ready AI governance
TapPass provides the monitoring, incident detection, and audit evidence that DORA requires for AI agents. See it in a financial services context.
Book a demo