The Agent is Inside the Perimeter: A Case for Enterprise-Wide AI Agent Governance
- Ayesha Dissanayaka
- Associate Director/Architect, WSO2
When Vercel disclosed in April 2026 that an intrusion into its internal systems began with the compromise of a third-party AI agent platform connected to an employee's Google Workspace account, it crystallized a risk many security leaders had been tracking but few had operationalized. The attacker did not phish an engineer. They did not defeat MFA. They compromised an AI tool the employee had quietly connected months earlier, and rode its existing OAuth grants into the enterprise.
The incident is instructive, but it is not the point. The point is that this class of breach is now architecturally inevitable in every enterprise that has allowed AI agents to proliferate without governance. And proliferate they have: Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025. The question for security leadership is no longer whether agents will become the pivot point in a serious incident. It is whether the controls, visibility, and organizational posture will be in place to contain one when they do.
The threat model has shifted, and most programs have not
For two decades, enterprise security has been oriented around two principal types: humans, governed through identity providers, MFA, and conditional access; and machines, governed through service accounts tied to infrastructure. Agents are neither. They are invoked by humans, act on humans' behalf, hold standing OAuth grants across SaaS boundaries, and, uniquely, are reprogrammable at runtime by whoever can reach their prompt interface.
This is a new identity class with a new blast radius (OWASP, 2026), and it has entered the enterprise through the side door. A single employee, acting in good faith, can in five minutes grant a third-party AI platform read access to the corporate mailbox, write access to a code repository, and deployment rights to production. The consent screen looks like every other OAuth dialog that employee has ever clicked through. The control plane your security team built (the IdP, the CASB, the DLP, the SIEM) largely does not see what just happened.
Agent governance is mandatory, and it must be organization-wide
The default posture in most enterprises today is tool-by-tool, employee-by-employee, consent-by-consent. That posture does not scale, and it does not survive a serious audit. Agent governance has to be pulled up to the same level as identity governance generally: a centrally administered program, with a registry of every approved agent, clearly assigned ownership, defined lifecycles, and a mechanism to revoke access across the organization within minutes of a vendor incident disclosure.
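A central registry does not need to be elaborate to be useful: one structured record per approved agent, with an accountable owner, the scopes it holds, a lifecycle state, and a revocation operation that can be run across the organization in one step. A minimal sketch in Python (all field names and the `revoke_vendor` operation are illustrative assumptions, not a specific product's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class LifecycleState(Enum):
    APPROVED = "approved"
    SUSPENDED = "suspended"
    REVOKED = "revoked"


@dataclass
class AgentRecord:
    """One entry in the enterprise agent registry."""
    agent_id: str
    vendor: str                  # platform the agent runs on
    owner: str                   # accountable human or team
    scopes: list                 # OAuth scopes it holds
    state: LifecycleState = LifecycleState.APPROVED
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        self._agents[record.agent_id] = record

    def revoke_vendor(self, vendor: str) -> list:
        """Revoke every agent from one vendor, e.g. within minutes of
        that vendor's incident disclosure. Returns affected agent IDs."""
        revoked = []
        for rec in self._agents.values():
            if rec.vendor == vendor and rec.state != LifecycleState.REVOKED:
                rec.state = LifecycleState.REVOKED
                revoked.append(rec.agent_id)
        return revoked
```

The point of the sketch is the shape of the data, not the storage: whatever backs the registry, a vendor-wide revocation must be a single administrative action, not a hunt through individual employees' consent screens.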
This is not only a technology problem. Procurement needs to evaluate agent platforms against a standard security bar. Legal needs data processing terms that anticipate agent behavior, not classic SaaS data flows. Engineering needs paved-road patterns so that teams do not reach for shadow integrations to get their work done. And the security function needs the mandate to say no. More often, it needs the mandate to say yes only with enumerated controls. Without that mandate, agent adoption will continue to outrun agent governance.
“Only 37% of organizations have AI governance policies in place as of 2025, and shadow AI now adds an average of $670K to the cost of a breach” (IBM, 2025).
Employee education is load-bearing, not optional
Technical controls will not close this gap alone. The people clicking Allow on agent consent screens are almost never the people who will be on the bridge call when the agent platform is breached. They do not read the scopes. They do not think of an AI assistant as a privileged identity. They think of it as a productivity tool, and they are rewarded, implicitly, for adopting it quickly.
Every employee who can authorize a SaaS integration is now, in effect, provisioning a workload identity inside the enterprise. That framing has to make it into security awareness training, and it has to be reinforced with friction at the point of decision: a sanctioned agent catalog, a clear approval workflow for anything outside it, and a default-deny posture on the OAuth grants that matter most (source control, cloud consoles, email, and anything touching customer data).
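That friction can be expressed as a small policy function: sanctioned agents pass, requests touching sensitive scopes are denied outright, and everything else is routed to the approval workflow. A sketch under assumed names (the scope strings and catalog are illustrative, not any provider's actual scope vocabulary):

```python
# Default-deny check on requested OAuth scopes for an agent integration.
# Scope names and the sensitive set are illustrative assumptions.
SENSITIVE_SCOPES = {
    "repo.write",         # source control
    "cloud.admin",        # cloud consoles
    "mail.read",          # email
    "customer_data.read", # anything touching customer data
}


def consent_decision(agent_id: str, requested_scopes: set,
                     sanctioned_catalog: set) -> str:
    """Return 'allow', 'deny', or 'needs_approval' for one consent request."""
    if agent_id in sanctioned_catalog:
        return "allow"            # pre-approved via the agent catalog
    if requested_scopes & SENSITIVE_SCOPES:
        return "deny"             # default-deny on the grants that matter most
    return "needs_approval"       # low-risk but unsanctioned: route to workflow
```

The design choice worth noting is the ordering: catalog membership is checked before scope sensitivity, so a sanctioned agent can legitimately hold a sensitive grant, while an unknown one cannot acquire it at all.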
Ingress was the last decade's problem. Egress is the new perimeter
The most important reframing sits here. Classical enterprise security architecture is overwhelmingly concerned with ingress: who is coming in, what they are authenticated as, what they are allowed to reach. Agents invert that model. An agent's risk profile is defined largely by what it calls out to: which APIs, which data stores, which third-party models, which vendor endpoints. Context (potentially including regulated data, source code, and internal documents) leaves the enterprise every time an agent makes an outbound call to an external model provider or tool server.
In the agentic enterprise, egress governance is mandatory. That means an enforcement point in the path of every agent-originated outbound call, not just a log of it after the fact. It means scoped, short-lived credentials issued at invocation time rather than standing OAuth grants issued at installation. It means policy decisions that consider which agent is calling, on whose behalf, with what context, and to which destination. Those decisions must be able to block, redact, or require step-up approval before data crosses the boundary. It means applying guardrails at the LLM call itself: content inspection, PII redaction, and tool-use constraints enforced at the model boundary, not only at the API gateway. And it means audit trails that preserve the delegation chain with enough fidelity to answer, after an incident, exactly which agent touched which data on whose authority.
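What an in-path enforcement point looks like can be sketched in a few lines: a per-agent destination allowlist, a redaction pass before data crosses the boundary, and an audit record that preserves the delegation chain. Everything here is a simplified assumption for illustration (the policy table, the PII pattern, and the function name are invented; a production guardrail would do far more than match email addresses):

```python
import re
from datetime import datetime, timezone

# Illustrative policy: which external endpoints each agent may call.
ALLOWED_DESTINATIONS = {
    "support-agent": {"api.model-provider.example"},
}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # toy PII detector

AUDIT_LOG = []  # preserves who touched what, on whose authority


def egress_check(agent_id: str, on_behalf_of: str,
                 destination: str, payload: str):
    """Decide block / redact / allow for one agent-originated outbound
    call, and record the delegation chain before anything leaves."""
    if destination not in ALLOWED_DESTINATIONS.get(agent_id, set()):
        verdict, payload = "block", None
    elif EMAIL_RE.search(payload):
        verdict = "redact"
        payload = EMAIL_RE.sub("[REDACTED]", payload)  # strip PII in-path
    else:
        verdict = "allow"
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "on_behalf_of": on_behalf_of,   # the delegation chain
        "destination": destination,
        "verdict": verdict,
    })
    return verdict, payload
```

The essential properties are that the check sits in the request path (a blocked or redacted payload never reaches the destination) and that the audit record is written for every decision, not only for denials, so the post-incident question of which agent touched which data on whose authority has an answer.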
Most enterprises have none of this today. The investment required is real, but it is not larger than what the same organizations spent a decade ago to build out ingress IAM, and the exposure being managed is comparable.
The imperative ahead
The agentic enterprise is not a future state. It is the current state, unevenly distributed and largely ungoverned. The organizations that navigate the next two years well will be those that treat agents as a first-class identity category now: registered, scoped at invocation, bounded at egress, audited end-to-end, and backed by an employee base that understands what it is authorizing when it clicks Allow.
The alternative is to wait for the incident that forces the reorganization. One has already been disclosed this month. There will be more.
Further reading: “Becoming an Agentic Enterprise with WSO2” by WSO2 sets out the broader architectural fabric for this shift: controlled ingress and egress for AI traffic, guardrails that cover both models and tools, identity and policy enforcement for humans and agents alike, and the platform foundations needed to run agents in production rather than as isolated pilots.