The Incident
On April 19, 2026, Vercel disclosed unauthorized access to internal systems. The entry point was not Vercel’s infrastructure, not a Vercel employee’s password, and — despite early reporting that framed this as an SCM attack — not a code-repository compromise either. It was a non-human identity: an OAuth access token belonging to Context.ai’s “AI Office Suite,” a third-party AI agent integration that a Vercel employee had authorized against their Google Workspace during onboarding.
Context.ai is an enterprise AI platform that builds agents on company-specific knowledge. In February 2026, a Context.ai employee’s workstation was infected with Lumma Stealer through a Roblox-exploit lure. The infostealer harvested credentials and OAuth tokens belonging to Context.ai’s customers, including the Vercel employee’s Google Workspace token, tied to Google client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj. Months later, the attacker replayed that token to take over the employee’s Google Workspace account, pivoted into Vercel’s Google-SSO-gated internal systems, and enumerated environment variables that had been stored in plaintext rather than marked sensitive. A threat actor posing under the “ShinyHunters” brand (attribution the established group has publicly denied) later listed the data on BreachForums for $2 million, claiming 580 employee records and assorted NPM and GitHub tokens; those claims remain attacker-asserted, and Vercel has not independently confirmed them. Vercel’s customer-facing hosting services remained operational, and Vercel confirmed that the NPM packages it publishes were unaffected.
Vercel CEO Guillermo Rauch characterized the attack as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” MITRE ATT&CK coverage: T1528 (Steal Application Access Token), T1550.001 (Use Alternate Authentication Material: Application Access Token), and T1078.004 (Valid Accounts: Cloud Accounts).
The Authority Path That Failed
The identity carrying execution authority into Vercel was a machine identity: the OAuth token Context.ai held against a Vercel employee’s Google Workspace. The scope the token held was broad, described in post-incident reporting as “Allow All” across the employee’s workspace, covering Drive and the adjacent surfaces the AI agent used for document search and generation. That scope far exceeded what the agent’s function required: once the raw token was in an attacker’s hands, it authorized full account takeover of the Vercel employee’s Workspace and, from there, traversal into Vercel’s internal SaaS gated by the same Google identity.
The trust anchor that failed first was Context.ai’s custody of the token — a vendor developer’s laptop is not a credential vault, and a long-lived OAuth grant stored there survives any endpoint compromise of the vendor. But the deeper failure sat upstream of the stealer: the AI agent held broad, long-lived OAuth grants against employee workspaces at every customer it served. A Drive-search assistant does not require tokens that survive upstream vendor compromise, and it does not require scopes that let Google-account takeover chain cleanly into internal systems beyond Drive. The gap between what the token held and what it plausibly needed to exercise was auditable before the incident — by the victim, against the specific Google client ID that Vercel would later name in its bulletin.
SecurityV0 Perspective
An organization running SecurityV0 would see nhi_compromise surface for the Context.ai AI Office Suite OAuth grant against any Vercel employee identity — starting well before April 19. The finding applies because the token held authority that was neither time-bound, narrowly scoped, nor anchored to the specific artifact the AI agent would authorize. SecurityV0 inventories every third-party OAuth grant that employees have issued against the organization’s identity provider, maps each grant’s resolved scope to the documented minimum scope the vendor needs for the feature in use, and flags the delta.
The evidence pack for this finding would show: the full list of AI-agent OAuth grants against employee Workspaces with their resolved scopes, each grant’s delta versus the vendor’s own minimum-scope documentation, the vendor’s token-custody posture where knowable (whether tokens live on vendor developer endpoints and how often they rotate), and — at the point of compromise — the specific Google client ID whose token activity deviates from the employee’s baseline. That pack gives a security team what they need before exfiltration begins: a specific OAuth client, a specific scope-vs-function gap, and a specific revoke-or-restrict decision. After the fact, the same pack answers the question Vercel’s incident responders are answering now: which of our employees held AI-agent OAuth grants to the same vendor, which of those tokens carried scope that would have survived upstream theft, and which internal systems were reachable from the victim’s Workspace session.
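The scope-delta computation at the heart of that evidence pack can be sketched in a few lines. This is a minimal illustration of the audit logic, not SecurityV0’s implementation: the grant records loosely mirror what Google’s Admin SDK Directory API `tokens.list` returns per user, and both the sample grant and the vendor’s documented minimum scopes are hypothetical.

```python
# Minimal sketch of a scope-delta audit over third-party OAuth grants.
# Grant records loosely mirror Admin SDK Directory API `tokens.list` output
# (clientId, displayText, scopes); all data here is hypothetical sample data.

DOCUMENTED_MINIMUM = {
    # Vendor-documented minimum scopes for the feature in use (assumed).
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj": {
        "https://www.googleapis.com/auth/drive.readonly",
    },
}

grants = [
    {
        "user": "employee@vercel.example",
        "clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj",
        "displayText": "Context.ai AI Office Suite",
        "scopes": {
            "https://www.googleapis.com/auth/drive",
            "https://mail.google.com/",
            "https://www.googleapis.com/auth/admin.directory.user.readonly",
        },
    },
]

def scope_delta(grant):
    """Return the scopes a grant holds beyond the vendor's documented minimum."""
    minimum = DOCUMENTED_MINIMUM.get(grant["clientId"], set())
    return grant["scopes"] - minimum

# Any grant with a non-empty delta is an nhi_compromise candidate.
findings = [
    (g["user"], g["clientId"], sorted(scope_delta(g)))
    for g in grants
    if scope_delta(g)
]
for user, client_id, excess in findings:
    print(f"nhi_compromise candidate: {user} -> {client_id}: {excess}")
```

Note that the flagged delta here is the full granted scope set: the sample grant holds read-write Drive, Gmail, and directory scopes while the assumed documented minimum is read-only Drive, so nothing the token holds is covered by the minimum.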
What To Do
- Treat every third-party AI-agent OAuth grant as a non-human identity on your attack surface. OAuth consent is not the same as a sandbox. A token with Workspace scope, held indefinitely on the vendor’s infrastructure, is functionally a persistent service account in your environment. Inventory every Google and Microsoft 365 OAuth grant your employees have issued to AI platforms and rank them by the scope they carry, not by how recently they were granted.
- Reject “Allow All” and broad-scope grants — especially for AI agents. If the agent’s function is Drive search, it needs Drive-read on specific folders, not full workspace. Most AI vendors expose narrower scopes only when customers ask; the default is convenience, not least privilege. Make narrow scope a procurement requirement, not a post-onboarding cleanup task.
- Audit vendor token custody before onboarding. Ask each AI vendor four questions in writing: where do our OAuth tokens live, are they encrypted at rest with customer-specific keys, who on your team has endpoint access to a machine that holds them, and what is your token-rotation cadence on customer offboarding. If any answer is “we’ll follow up,” that is the answer.
- Monitor for AI-agent OAuth token replay. The same Google client ID used to serve a Drive-search agent should not be the session initiating bulk email rules, Workspace account recovery, or SSO assertions into internal SaaS outside Drive. A per-client-ID baseline of expected agent behavior turns token replay into an actionable signal. Google Workspace admin-log telemetry carries the client ID on every API call the token makes.
- Partition internal systems so Google-SSO is not transitively everything. The Vercel incident escalated from Workspace takeover to env-var enumeration because internal SaaS sat behind the same Google identity as the employee’s Drive. Break the chain: production consoles, secret stores, and deploy systems should gate on a separate identity (hardware-key WebAuthn, step-up MFA on a distinct IdP) that does not ride on Google session cookies.
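The per-client-ID baseline described above reduces to an allowlist check over token-activity events. The sketch below illustrates the detection logic only; the event shapes and API method names are hypothetical stand-ins for what Google Workspace admin-log telemetry carries, not a drop-in monitor.

```python
# Sketch: flag OAuth token activity that deviates from a per-client-ID
# baseline of expected agent behavior. Event records and method names are
# hypothetical stand-ins for Workspace admin-log telemetry.

AGENT_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"

# Baseline: the API surface a Drive-search agent is expected to touch.
BASELINE = {
    AGENT_CLIENT_ID: {"drive.files.list", "drive.files.get"},
}

events = [
    {"clientId": AGENT_CLIENT_ID, "method": "drive.files.list"},
    # Token replay: the same client ID initiating account recovery and
    # mail-rule changes sits outside the agent's function.
    {"clientId": AGENT_CLIENT_ID, "method": "admin.accounts.recovery"},
    {"clientId": AGENT_CLIENT_ID, "method": "gmail.settings.filters.create"},
]

def replay_signals(events, baseline):
    """Yield events whose API method falls outside the client's baseline."""
    for event in events:
        allowed = baseline.get(event["clientId"], set())
        if event["method"] not in allowed:
            yield event

alerts = list(replay_signals(events, BASELINE))
for alert in alerts:
    print(f"replay signal: {alert['clientId']} called {alert['method']}")
```

A client ID absent from the baseline map gets an empty allowlist, so every call it makes surfaces as a signal; that default-deny posture is a deliberate choice here, since an unbaselined agent token is exactly the kind of grant the inventory step should have caught.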
Sources
- Vercel KB — April 2026 Security Incident
- CyberScoop — Vercel security breach: third-party attack via Context.ai and Lumma Stealer
- The Hacker News — Vercel breach tied to Context.ai hack
- Help Net Security — Vercel breached
- TechCrunch — App host Vercel confirms security incident; customer data stolen via breach at Context.ai
- BleepingComputer — Vercel confirms breach as hackers claim to be selling stolen data
- CyberInsider — Vercel confirms security incident
- Decrypt — Highly sophisticated AI-powered hackers behind Vercel breach, CEO
- GitGuardian — Vercel April 2026 incident: non-sensitive environment variables need investigation too
- Tom’s Hardware — Vercel breached after employee grants AI tool unrestricted access to Google Workspace
- MITRE ATT&CK: T1528, T1550.001, T1078.004