In 2010, CISOs were asking: "How do I govern cloud services that my teams are adopting faster than I can evaluate them?" In 2026, the same question is being asked about AI agents. The parallels are instructive because we already know how the cloud story played out.

The cloud security journey took about 15 years. It started with denial ("we don't use cloud"), moved through panic ("we need to block cloud"), evolved to pragmatism ("we need to govern cloud"), and arrived at maturity ("cloud governance is just governance"). AI agent governance is following the same trajectory, compressed into a shorter timeline because the technology moves faster and the regulatory environment is less patient.

Here are six specific parallels and what they tell us about what comes next for AI governance.

Parallel 1: Shadow adoption

In 2011, the average enterprise had 461 cloud services in use. The IT department knew about 51 of them. The rest were shadow IT: teams signing up for SaaS tools with a credit card because procurement was too slow and IT's answer was always "no."

The same pattern is playing out with AI agents. Developers are connecting agents to production systems using personal API keys. Business teams are building workflows with no-code AI tools. Data science teams are deploying models as agents without involving security. The CISO's inventory shows 5 sanctioned AI agents. The real number is probably 50.

Cloud security solved this with Cloud Access Security Brokers (CASBs): proxy layers that discovered, monitored, and controlled cloud usage. The CASB didn't replace the cloud services. It sat between the organization and the services, providing visibility and enforcement without blocking adoption.

AI agent governance needs the equivalent: a proxy layer between agents and model providers that discovers, monitors, and controls agent activity. Not a blocker. A broker.
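As a minimal sketch of the broker idea (all names here are illustrative, not a real product's API): a layer that every agent-to-provider call passes through, so discovery and monitoring happen as a side effect of the traffic itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentBroker:
    """Sits between agents and model providers: discovers and records usage."""
    inventory: dict = field(default_factory=dict)   # agent_id -> request count
    log: list = field(default_factory=list)

    def forward(self, agent_id: str, provider: str, payload: dict) -> dict:
        # Discovery: any agent that talks through the broker is inventoried,
        # even if security never sanctioned it explicitly.
        self.inventory[agent_id] = self.inventory.get(agent_id, 0) + 1
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "provider": provider,
            "payload_keys": sorted(payload),
        })
        # A real deployment would proxy the call to the provider here;
        # this sketch returns a stub response instead.
        return {"status": "forwarded", "provider": provider}

broker = AgentBroker()
broker.forward("billing-summarizer", "openai", {"model": "gpt-4o", "prompt": "..."})
broker.forward("support-triage", "anthropic", {"model": "claude", "prompt": "..."})
print(sorted(broker.inventory))  # discovered agents, sanctioned or not
```

The design choice mirrors the CASB lesson: because visibility comes from the traffic path rather than from a registration form, the inventory covers shadow agents too.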

Parallel 2: Shared credentials

Early cloud adoption was plagued by shared accounts. Teams shared login credentials for AWS consoles. Multiple people used the same root account. API keys were committed to source code repositories. The 2019 Capital One breach, which exposed the records of roughly 100 million people, was fundamentally a credential management failure: a server-side request forgery flaw let the attacker obtain credentials for an over-permissioned IAM role, and that excessive access did the rest.

AI agents today operate with shared API keys. Multiple agents use the same OpenAI or Anthropic key. The key has full access to every model, every feature, every capability. There is no per-agent scoping. Revoking one agent's access means revoking everyone's access. The credential management problem of early cloud is repeating itself.

Cloud security solved this with IAM: individual identities, scoped permissions, short-lived credentials, and the principle of least privilege. AWS IAM launched in 2010. It took years for organizations to adopt it properly (many still haven't), but the architecture was clear. Each workload gets its own identity. Each identity gets only the permissions it needs.

AI agents need IAM. Per-agent identity. Per-agent permissions. Short-lived credentials. Least privilege enforced at runtime, not documented in a wiki.
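A sketch of what per-agent IAM could look like in code, assuming a hypothetical in-house token issuer (the scope strings and class names are illustrative): each agent gets its own short-lived, scoped credential, and authorization is checked at runtime on every use.

```python
import secrets
import time

class AgentCredentialIssuer:
    """Hypothetical issuer of per-agent, scoped, short-lived tokens."""

    def __init__(self, ttl_seconds: int = 900):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> (agent_id, scopes, expiry)

    def issue(self, agent_id: str, scopes: set) -> str:
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (agent_id, frozenset(scopes), time.time() + self.ttl)
        return token

    def authorize(self, token: str, scope: str) -> bool:
        record = self._tokens.get(token)
        if record is None:
            return False
        agent_id, scopes, expiry = record
        # Least privilege at runtime: expired or out-of-scope requests fail.
        return time.time() < expiry and scope in scopes

issuer = AgentCredentialIssuer(ttl_seconds=900)
tok = issuer.issue("report-writer", {"model:gpt-4o:invoke"})
print(issuer.authorize(tok, "model:gpt-4o:invoke"))  # True
print(issuer.authorize(tok, "model:o3:invoke"))      # False: not in scope
```

Revoking one agent means expiring one token, not rotating a key that fifty agents share.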

Parallel 3: Missing audit trails

AWS CloudTrail launched in 2013. Before CloudTrail, organizations running workloads on AWS had limited visibility into API activity. They could see their bill but couldn't tell you which IAM role called which API at what time. Security investigations were forensic exercises in log correlation.

AI agents in 2026 are at the pre-CloudTrail stage. Most organizations know how much they spend on model API calls. Few can tell you which agent called which model, with what parameters, at what time, and what the agent did with the response. The audit trail is the bill.

Cloud security solved this by making API-level logging a standard capability. Every API call is logged, structured, queryable, and retained. The expectation became that infrastructure is observable by default, not as an add-on.

AI agents need the same expectation. Every model call, every tool invocation, every data access should be logged by default. Not by the developer (who will forget). By the infrastructure.
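One way to make logging an infrastructure property rather than a developer habit is a wrapper the platform applies to every model call. A minimal sketch (the decorator and log sink are illustrative, not any framework's real API):

```python
import functools
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for a durable, queryable log store

def audited(agent_id: str):
    """Decorator applied by the platform, not the developer: every call
    that passes through it lands in the audit trail by default."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append(json.dumps({
                "time": datetime.now(timezone.utc).isoformat(),
                "agent": agent_id,
                "call": fn.__name__,
                "params": sorted(kwargs),
            }))
            return result
        return inner
    return wrap

@audited("invoice-agent")
def call_model(prompt: str, model: str = "gpt-4o") -> str:
    return f"response to: {prompt}"  # stand-in for a real provider call

call_model("summarize Q3 invoices", model="gpt-4o")
print(len(AUDIT_LOG))  # the call is in the trail without developer effort
```

The point is the placement: because the record is emitted by the wrapper, a forgetful developer cannot produce an unlogged call.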

Parallel 4: Post-hoc compliance

Early cloud compliance was documentation theater. An auditor would ask "where is your data stored?" and the team would scramble to figure out which AWS regions their S3 buckets were in. Compliance was a retroactive reconstruction of what had happened, not a real-time property of the system.

AI compliance today is the same kind of theater. Teams are writing AI policies that describe how agents should behave. The agents don't read the policies. The policies describe an ideal state that nobody verifies at runtime. When an auditor asks "what data does this agent access?", the answer is a policy document, not a live dashboard.

Cloud security solved this with policy-as-code and continuous compliance monitoring. Tools like AWS Config, Azure Policy, and Open Policy Agent shifted compliance from documentation to enforcement. Instead of writing a policy and hoping it was followed, the policy was expressed as code and evaluated continuously against the actual state of the infrastructure. Non-compliant resources were flagged or blocked automatically.

AI governance needs policy-as-code. Agent permissions expressed as machine-readable policy, evaluated on every request, with violations blocked or flagged in real time. Compliance becomes a property of the system, not a separate project.
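A toy illustration of the shape, loosely modeled on the allow/deny structure of cloud IAM policies (the policy format and agent names are assumptions for the example): the policy is data, the evaluation runs on every request, and the default is deny.

```python
# Machine-readable policy: which (action, resource) pairs each agent may use.
POLICY = {
    "support-triage": {
        "allow": {("invoke", "claude-sonnet"), ("read", "tickets-db")},
    },
}

def evaluate(agent: str, action: str, resource: str) -> str:
    """Default-deny evaluation run on every request, not at audit time."""
    rules = POLICY.get(agent, {})
    if (action, resource) in rules.get("allow", set()):
        return "allow"
    return "deny"

print(evaluate("support-triage", "invoke", "claude-sonnet"))  # allow
print(evaluate("support-triage", "write", "tickets-db"))      # deny
print(evaluate("unknown-agent", "invoke", "claude-sonnet"))   # deny
```

When the auditor asks what an agent can access, the answer is this data structure and its evaluation log, not a prose document.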

Parallel 5: The identity perimeter shift

The most significant conceptual shift in cloud security was the move from network-centric to identity-centric security. In the data center, the perimeter was the firewall. Inside the network was trusted. Outside was untrusted. Cloud dissolved that perimeter. Workloads were running on shared infrastructure across the internet. The firewall was no longer the boundary.

Zero trust emerged as the response: verify every request, regardless of where it comes from. The identity of the requester, not its network location, determines access. Google's BeyondCorp paper (2014) formalized what many were already discovering: the network is not the security boundary. Identity is.

AI agents are going through the same perimeter dissolution. There is no "inside the network" for an agent that calls OpenAI's API, queries a SaaS database, and writes to a cloud storage bucket. The agent's identity, not its network location, must determine what it can do. Every request must be verified. Every action must be authorized. The network perimeter is irrelevant.
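The identity-over-network principle can be sketched with per-request verification, for example HMAC-signed requests (the agent names and key handling here are illustrative; real keys would live in a secrets manager):

```python
import hashlib
import hmac

# Per-agent signing keys (illustrative; never hardcode keys in practice).
AGENT_KEYS = {"etl-agent": b"k1-secret", "chat-agent": b"k2-secret"}

def sign(agent_id: str, body: bytes) -> str:
    return hmac.new(AGENT_KEYS[agent_id], body, hashlib.sha256).hexdigest()

def verify(agent_id: str, body: bytes, signature: str) -> bool:
    """Zero-trust check: the request proves who sent it; where it came
    from on the network is never consulted."""
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"action": "write", "bucket": "reports"}'
sig = sign("etl-agent", body)
print(verify("etl-agent", body, sig))   # True: identity checks out
print(verify("chat-agent", body, sig))  # False: wrong identity, same network
```

Note what the check never asks: a source IP or subnet. The identity carried by the request is the whole perimeter.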

Parallel 6: The governance market

The cloud security market followed a predictable sequence. First came the infrastructure (AWS, Azure, GCP). Then came the early adopters who built without security. Then came the breaches that demonstrated the risk. Then came the governance tools (CASBs, CSPM, CWPP, CNAPP). Then came the regulations (GDPR, PCI DSS updates for cloud). Then came maturity.

The AI governance market is compressing this sequence. The infrastructure (model providers, agent frameworks) arrived in 2022-2024. The early adopters are deploying now. The breaches are beginning. The governance tools are emerging. The regulations (EU AI Act) are already in force, ahead of the maturity curve. This is unusual. It means organizations face regulatory obligations before governance tooling has matured. The gap between regulatory requirements and available tools is where the risk concentrates.

The cloud security market took 10 years to mature. AI governance has about 2 years before regulatory enforcement makes the gap painful.

What cloud got right eventually

Cloud security arrived at a set of principles that AI governance should adopt from the start, rather than rediscovering them over a decade:

- Identity first: every workload gets its own identity, and access follows identity, not network location.
- Least privilege: each identity gets only the permissions it needs, enforced at runtime.
- Observable by default: every API call is logged, structured, and queryable as an infrastructure guarantee, not an add-on.
- Policy-as-code: policy is machine-readable and evaluated on every request, not documented and hoped for.
- Continuous compliance: non-compliant activity is flagged or blocked in real time, not reconstructed for auditors.

Every significant technology shift creates a governance gap: the gap between adoption speed and governance maturity. Cloud created one that took a decade to close. AI is creating one that regulatory pressure will compress into two or three years. The good news is that we don't need to invent the governance playbook. We already wrote it for cloud. The principles are the same: identity, least privilege, observability, policy-as-code, continuous compliance. The application is different. The lesson is available.

Apply the cloud playbook to AI

TapPass is the CASB for AI agents. Identity, monitoring, policy enforcement, and audit trails. See how it works on your infrastructure.

Book a demo