
Shadow IT Is Back - And Vibe Coding Made It 10x Worse

AI coding tools are the new Shadow IT - but instead of rogue Trello boards, they have OAuth access to your code repos, cloud accounts, and production databases. Here's what's already gone wrong, and how platform engineering fixes it.

Romaric Philogene
CEO & Co-founder
MAY 1, 2026 · 5 MIN

Key points:

  • AI coding tools - from Claude Code to Cursor to Copilot - are the new Shadow IT. "Vibe coding" is the most visible symptom, but the risk is broader
  • In April 2026, a single unsanctioned AI tool led to the Vercel breach. No exploit. No zero-day. Just an OAuth grant.
  • AI coding agents have deleted production databases in under 10 seconds
  • The answer is not banning AI tools - it's routing them through a governed platform where every action is scoped, audited, and reversible

Shadow IT is not a new problem. But AI coding tools just gave it root access to your infrastructure.


I've been building cloud infrastructure for 15+ years. I've seen every wave of Shadow IT - from rogue Dropbox accounts to developers spinning up unmanaged AWS resources on personal credit cards. Each wave was a headache. But each wave had a limited blast radius. A rogue Trello board won't delete your production database. An unsanctioned Slack workspace won't leak your OAuth tokens to attackers.

AI coding agents? They absolutely can.

From Dropbox to Claude Code: A Brief History

Let's get the full picture. Shadow IT has evolved in three distinct waves:

  • Wave 1 (2010s): SaaS tools. Dropbox, Slack, Trello. Employees adopted them for productivity. The risk? Data stored in unsanctioned apps. Annoying, but manageable.
  • Wave 2 (2015-2020): Cloud accounts. Developers spinning up AWS or GCP resources without IT approval. The risk? Unmanaged infrastructure, surprise bills, security blind spots.
  • Wave 3 (2025-now): AI coding agents. Claude Code, Cursor, Copilot, Replit. The risk? They don't just store data - they write code, run commands, and deploy to production on your behalf.

Each wave gave Shadow IT more power. This wave gave it root.

Here are the numbers: 80% of employees use Shadow IT (Cisco). 41% of employees have acquired, modified, or created technology without their IT team's knowledge (IBM). And 38% of technology purchases are managed by business leaders, not IT (Gartner, via IBM).

Now imagine those stats applied to tools that have write access to your production environment.

Why AI Coding Tools Are Shadow IT on Steroids

Here is what makes this wave categorically different from the previous ones:

  • Broad OAuth permissions. AI tools need access to your repos, cloud providers, and sometimes databases to function. A developer who signs up with their work Google account just granted that tool access to everything connected to that identity.
  • Pre-authorized lateral movement. Every AI SaaS integration is a path that attackers inherit when the vendor is compromised. One breach at the AI vendor cascades downstream to your infrastructure.
  • Code execution, not just data storage. Unlike Slack or Dropbox, these tools don't just hold data. They write code, modify infrastructure, and push to production.
  • No human review. That's the essence of "vibe coding" - you describe what you want, the AI builds it, and you ship it. But even in more disciplined AI-assisted workflows, the OAuth permissions and infrastructure access are the same.

David Lindner, CISO at Contrast Security, put it bluntly:

The Breaches Are Already Here

This is not a theoretical risk. It's already happening.

The Vercel/Context.ai Breach (April 2026)

A Vercel employee signed up for Context.ai's AI Office Suite using their work Google Workspace account. They granted "Allow All" permissions. Context.ai was later compromised - an employee there had downloaded Roblox cheats containing an infostealer. Attackers used the stolen OAuth tokens to access Vercel's environments and environment variables. The breach data was reportedly being sold for $2 million.

No sophisticated attack. No zero-day. Just an unsanctioned tool and an overpermissioned OAuth grant.

Jaime Blasco, CTO of Nudge Security, nailed it:

AI Agents Deleting Production Databases (May 2026)

PocketOS founder Jer Crane described how a Claude-powered coding agent deleted the company's entire production database AND all volume-level backups in 9 seconds. The agent violated every safety principle it was given while trying to address a credential mismatch.

Separately, a VC investor spent 100 hours building with Replit's AI agent - only to discover the agent was covering up its own mistakes. It eventually deleted the production database too.

Ryan McCurdy, VP at Liquibase:

AI-Powered Supply Chain Attacks (April 2026)

A threat actor used AI-assisted automation to open 450+ malicious pull requests against open-source repos on GitHub in a 26-hour period. About 10% succeeded, compromising at least two NPM packages.

Wiz Research:


The Real Problem: No Governance Layer

Here's what frustrates me. None of these breaches required a sophisticated attack. They all exploited the same gap: nothing sits between the AI tool and production.

Traditional Shadow IT had a limited blast radius. A rogue Trello board can't delete your database. But an AI agent operating outside your sanctioned platform has an unbounded blast radius. Here is what's missing (a minimal sketch of one such control follows the list):

  • Least-privilege access for AI agents (most get full repo and cloud access)
  • Environment separation (dev vs. production - AI agents rarely have this enforced)
  • Destructive-action confirmation gates (the PocketOS agent deleted everything without a single confirmation)
  • OAuth consent management (admin-managed, not employee self-service)
  • Continuous monitoring of agent behavior (most organizations have zero visibility)
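To make the gap concrete, here is a minimal Python sketch of the kind of destructive-action gate and audit record most agent integrations skip. Everything in it - the `guarded_call` helper, the `AuditLog` class, the `agent:` actor prefix - is hypothetical, not a reference to any real library:

```python
# Hypothetical sketch - not a real library. It shows the shape of a
# destructive-action gate plus audit record that most agent integrations lack.
import datetime
import json

DESTRUCTIVE_VERBS = {"delete", "drop", "destroy", "truncate", "terminate"}

class AuditLog:
    def record(self, actor: str, action: str, target: str, allowed: bool) -> None:
        # In practice this would go to an append-only store, not stdout.
        print(json.dumps({
            "ts": datetime.datetime.utcnow().isoformat() + "Z",
            "actor": actor, "action": action, "target": target, "allowed": allowed,
        }))

def guarded_call(actor: str, action: str, target: str, env: str,
                 confirmed: bool, audit: AuditLog) -> bool:
    """Return True only if the action may proceed; log the decision either way."""
    destructive = any(verb in action.lower() for verb in DESTRUCTIVE_VERBS)
    # Agents never touch production by default...
    allowed = not (env == "production" and actor.startswith("agent:"))
    # ...and destructive actions always need an explicit human confirmation.
    if destructive and not confirmed:
        allowed = False
    audit.record(actor, action, target, allowed)
    return allowed

# A PocketOS-style "drop the production database" call is blocked and logged:
guarded_call("agent:claude-code", "drop database", "prod-db", "production",
             confirmed=False, audit=AuditLog())  # returns False
```

The specific code doesn't matter. What matters is that in most agent-to-infrastructure paths today, nothing like it exists anywhere.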

Nicole Carignan, SVP at Darktrace:

The problem is not that developers use AI tools. The problem is that nothing sits between those tools and production.

The Solution: Make the Fast Path the Safe Path

I want to be clear - banning AI coding tools is not the answer. The productivity gains are too real. I've seen developers ship in hours what used to take weeks. You can't put that genie back in the bottle, and you shouldn't try.

The answer is channeling AI tools through a governed platform where every action - human or AI - passes through the same RBAC, policies, and audit trail.

This is where Internal Developer Platforms shine. Instead of developers wiring AI agents directly into kubectl, AWS consoles, and Docker registries, you route everything through a single API that:

  • Scopes every action with RBAC and policy enforcement - see the sketch after this list
  • Audits every deployment - who did what, when, and why
  • Isolates environments so an AI agent can't accidentally touch production
  • Controls costs, secrets, and infrastructure boundaries
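As a rough illustration only - this is not Qovery's actual API - the heart of such a layer can be a single scope check that every caller, human or AI, must pass before a deploy is executed. The role names and scopes below are made up:

```python
# Illustrative only - not Qovery's API. A toy RBAC check that a governed
# deploy endpoint might run before executing any request.
from dataclasses import dataclass

ROLE_SCOPES = {
    "developer":       {("deploy", "dev"), ("deploy", "staging")},
    "release-manager": {("deploy", "dev"), ("deploy", "staging"), ("deploy", "production")},
    "ai-agent":        {("deploy", "dev")},  # agents get dev only, by default
}

@dataclass
class DeployRequest:
    caller: str       # e.g. "agent:claude-code" or "user:alice"
    role: str
    service: str
    environment: str

def authorize(req: DeployRequest) -> bool:
    """Every deploy - human or agent - goes through the same check."""
    allowed = ("deploy", req.environment) in ROLE_SCOPES.get(req.role, set())
    # A real platform would also write an audit record here:
    # who (caller), what (service + environment), when, and the decision.
    return allowed

print(authorize(DeployRequest("agent:claude-code", "ai-agent", "api", "production")))  # False
print(authorize(DeployRequest("user:alice", "release-manager", "api", "production")))  # True
```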

At Qovery, we built exactly this. Every deployment action - whether it comes from a human clicking a button or an AI agent running a prompt - goes through the same governance pipeline. SOC 2 Type II, HIPAA, GDPR compliant. Full audit trail. Every action scoped and reversible.

The goal is not to slow developers down. It's to make the fast path the safe path.

What Platform Teams Should Do Now

If you're a platform engineer or CTO, here's my checklist:

  • Audit your OAuth grants today. Find out which AI tools already have access to your code repos and cloud accounts. You'll probably be surprised - a starting-point script follows this list.
  • Establish an approved AI tool list with managed SSO. Don't let individual employees grant OAuth permissions to random AI services.
  • Enforce environment separation. AI agents should never have production credentials by default. Never.
  • Route all deployments through a governed platform. Not kubectl. Not random cloud consoles. Not AI agents with direct AWS access.
  • Monitor AI agent behavior. The same way you monitor human access - full audit trail, attribution, alertability.
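For the first item, one quick starting point is to enumerate the GitHub Apps installed on your organization and flag the ones with write access to code. The sketch below assumes an org admin token in a `GITHUB_TOKEN` environment variable and a placeholder org name; it calls GitHub's "list app installations for an organization" endpoint:

```python
# Sketch: list GitHub Apps installed on an organization and flag write access
# to code. Assumes an org admin token in GITHUB_TOKEN; replace the org name.
import os
import requests

ORG = "your-org"  # placeholder
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/installations",
    headers=headers,
    params={"per_page": 100},
)
resp.raise_for_status()

for inst in resp.json().get("installations", []):
    perms = inst.get("permissions", {})
    writes_code = perms.get("contents") == "write"
    status = "WRITE access to code" if writes_code else "read-only / other"
    print(f"{inst.get('app_slug', 'unknown')}: {status} (installed {inst.get('created_at')})")
```

This only covers GitHub App installations on the org. OAuth apps that individual employees authorized against their own accounts - and Google Workspace grants like the one in the Vercel breach - have to be reviewed separately in each provider's admin console.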

Wrapping Up

Shadow IT evolved. Your security posture needs to evolve with it. The companies that win won't be the ones that ban AI tools - they'll be the ones that govern them.

If you want to see how this looks in practice - AI agents building and deploying through a governed platform - check out our article on Build with Claude Code, Deploy with Qovery.

Try Qovery free - or come chat with us on Discord.

About the author
Romaric Philogene

Romaric founded Qovery to make Kubernetes accessible to every engineering team. He writes about platform strategy, developer experience, and the future of cloud infrastructure.

Next step

Agents ship fast. Guardrails keep them safe.

Qovery ensures every agent action is scoped, audited, and policy-checked. Start deploying in under 10 minutes.