Independent research and operating notes on AI Software Delivery Control.
Field Note / Authority Risk
AI security has mostly been discussed as a content problem: prompt leakage, data exposure, unsafe output, and model behavior. The next control problem is authority: what AI-connected tools and workflows can do once they hold OAuth grants, tokens, repo access, CI/CD permissions, or cloud credentials.
The reported Vercel/Context.ai incident is useful because it makes the pattern concrete without requiring speculation about autonomous agents. Public reporting points to a third-party AI tool compromise, delegated OAuth access, and a downstream pivot into a workplace environment.
Last updated: May 6, 2026
Vercel's April 2026 security bulletin says the incident involved unauthorized access to certain internal Vercel systems and originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. Vercel says the attacker used that access to take over the employee's Google Workspace account, then reached the employee's Vercel account and enumerated non-sensitive environment variables.
Vercel also published a Google Workspace OAuth application indicator for administrators to check. Separately, TechCrunch reported that Context.ai said attackers likely compromised OAuth tokens for some users of its AI Office Suite consumer app.
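The indicator is directly actionable. As a hedged sketch: Google Workspace administrators can enumerate the OAuth tokens users have granted through the Admin SDK Directory API's tokens.list method and match them against a published client ID. The service-account setup, the admin address, and the client ID below are placeholders, not values from Vercel's bulletin; substitute the actual published indicator.

```python
# Sketch: sweep Workspace users' OAuth grants for a suspect client ID
# using the Admin SDK Directory API (tokens.list). Assumes a service
# account with domain-wide delegation and the
# admin.directory.user.security scope; all identifiers are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
SUSPECT_CLIENT_ID = "<client-id-from-vercel-bulletin>"  # placeholder

creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES
).with_subject("admin@example.com")  # impersonate a Workspace admin
directory = build("admin", "directory_v1", credentials=creds)

def flag_suspect_grants(user_emails):
    """Yield (user, token) pairs whose OAuth grant matches the indicator."""
    for email in user_emails:
        tokens = directory.tokens().list(userKey=email).execute()
        for token in tokens.get("items", []):
            if token.get("clientId") == SUSPECT_CLIENT_ID:
                # Revocation, if warranted:
                # directory.tokens().delete(
                #     userKey=email, clientId=token["clientId"]).execute()
                yield email, token
```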
Those reported facts are enough to study the control pattern. They are not enough to claim that every AI coding agent, CI/CD system, source-code repository, npm package, or cloud control plane was compromised in this incident. In fact, Vercel's bulletin says it found no evidence of npm package tampering.
Content risk asks what an AI tool can see, reveal, summarize, or generate. Authority risk asks what an AI-connected tool or workflow can do with delegated access.
That distinction matters because many AI tools are not isolated chat windows anymore. They connect to Google Workspace, Microsoft 365, GitHub, Slack, Jira, CI/CD systems, package registries, cloud accounts, document stores, and internal APIs. Once a tool holds a token or grant, the security question moves from model behavior to action authority.
The practical question is no longer only what data an AI tool can see. It is what actions an AI-assisted workflow can perform, against which systems, under whose authority, with what approval, and with what proof.
The first review should follow the grant or credential into the action path. A useful map is plain (each link is defined below; a minimal code sketch follows the definitions):
AI tool or workflow -> OAuth grant/token -> identity context -> action -> target system -> approval/proof
Grant: OAuth app, refresh token, service account, deploy key, CI secret, cloud role, or inherited platform identity.
Identity context: the user, bot, workflow, service, repo, organization, environment, or tenant whose authority is being used.
Action: read, write, execute, export, approve, publish, deploy, delete, secret access, cloud API, or production-adjacent change.
Proof: actor, credential, requested action, target, policy verdict, approver if required, validation result, and outcome.
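To make the map concrete, here is a minimal sketch of the action path as a record that a review or a policy gate could interrogate. Every name (ActionPath, requires_approval, the gated-action set) is illustrative, not drawn from any product or standard.

```python
# Minimal sketch: the action path above as an explicit record.
from dataclasses import dataclass, field

@dataclass
class ActionPath:
    tool: str                    # AI tool or workflow holding the authority
    grant: str                   # OAuth app, refresh token, service account, ...
    identity_context: str        # whose authority is being used
    action: str                  # read, write, export, deploy, delete, ...
    target: str                  # system or object the action lands on
    approval: str | None = None  # approver, if policy required one
    proof: dict = field(default_factory=dict)  # verdict, validation, outcome

def requires_approval(path: ActionPath, gated: set[str]) -> bool:
    """A path needs a human approver when its action is in the gated set."""
    return path.action in gated

# Example: an export by an AI office assistant under an employee's grant.
path = ActionPath(
    tool="ai-office-assistant",
    grant="google-workspace-oauth-app",
    identity_context="employee:alice@example.com",
    action="export",
    target="drive:finance-folder",
)
assert requires_approval(path, {"export", "deploy", "delete"})
```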
Teams do not need a new committee to act on this pattern. They need a short, concrete review of authority-bearing AI integrations.
A normal SaaS inventory can show that a tool exists. A model inventory can show which AI systems are approved. Neither is sufficient when the risk comes from delegated authority crossing several systems.
The artifact teams need is an action inventory. One sample entry, with a machine-readable sketch after the list:
Tool: AI office assistant
Owner: Business systems
Grant: Google Workspace OAuth app
Identity context: Employee account
Reachable actions: read/export documents, create content, send or modify workspace objects depending on scope
Approval-required: broad export, external share, privileged workspace access, admin-adjacent action
Proof: OAuth grant record, app ID, user identity, requested action, target object, policy verdict, revocation record
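The same entry can live as a machine-readable record, so a reviewer or a gate can answer the two operative questions: is the requested action reachable at all, and does it cross the approval line? A sketch follows, with field names and values mirroring the entry above; all of them are illustrative.

```python
# Sketch: the inventory entry above as a record, plus a default-deny check.
INVENTORY = [
    {
        "tool": "AI office assistant",
        "owner": "Business systems",
        "grant": "Google Workspace OAuth app",
        "identity_context": "Employee account",
        "reachable_actions": {"read", "export", "create", "send", "modify"},
        "approval_required": {"broad_export", "external_share",
                              "privileged_access", "admin_adjacent"},
        "proof_fields": ["grant_record", "app_id", "user_identity",
                         "requested_action", "target_object",
                         "policy_verdict", "revocation_record"],
    },
]

def check_action(tool_name: str, action: str) -> str:
    """Classify a requested action: deny, approval-required, or allowed."""
    entry = next((e for e in INVENTORY if e["tool"] == tool_name), None)
    if entry is None:
        return "deny"  # unknown tool: no standing authority
    if action in entry["approval_required"]:
        return "approval-required"
    if action in entry["reachable_actions"]:
        return "allowed-with-proof"
    return "deny"
```

Here check_action("AI office assistant", "broad_export") returns "approval-required", and any action missing from both sets is denied by default.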
That is the same discipline an Agent Action BOM brings to AI-assisted software delivery. The unit of analysis is not "is this AI tool safe?" It is "what can this tool or workflow touch, change, approve, or prove?"