Let AI agents
manage your infrastructure.
The Qovery MCP Server exposes Qovery's full API surface to AI coding agents via the Model Context Protocol. Claude, Cursor, OpenCode, and any MCP-compatible tool can deploy, scale, and debug your services by conversation.
What you get.
Standard MCP protocol
Implements the open Model Context Protocol. Any MCP-compatible AI agent can connect -- no custom integration code required.
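MCP is built on JSON-RPC 2.0: an agent lists the server's tools, then invokes them with `tools/call`. As an illustrative sketch only (the tool name and arguments below are hypothetical, not documented Qovery tool names), a deployment request from an agent might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "deploy_service",
    "arguments": { "service": "backend", "environment": "staging" }
  }
}
```

Because every MCP server speaks this same wire format, any compliant agent can drive Qovery without bespoke glue code.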
Full API coverage
Every Qovery action is available: deploy, scale, clone environments, manage variables, tail logs, check status. The agent has the same power as the CLI.
Works with popular agents
Tested with Claude Code, Cursor, OpenCode, Codex, and Gemini CLI. Install the Qovery MCP server and start deploying by conversation.
Secure by design
Runs locally. Uses your existing Qovery API token. No credentials leave your machine. RBAC is enforced server-side.
What you can do.
Conversational deploys
"Deploy the backend to staging" -- the agent resolves the service, picks the latest commit, triggers the deployment, and reports back.
AI-assisted debugging
"Why is the payment service failing?" -- the agent checks deployment status, tails logs, inspects environment variables, and suggests fixes.
Environment management
"Clone production to a new QA environment and scale it down" -- the agent handles the full workflow in one conversation.
Connect your AI agent
- 01 Install the Qovery CLI (includes the MCP server)
- 02 Authenticate: qovery auth
- 03 Start the MCP server: qovery mcp start
- 04 Configure your AI agent to connect to the local MCP endpoint
$ brew install qovery-cli
$ qovery auth
$ qovery mcp start

# In your AI agent config:
# MCP endpoint: stdio://qovery-mcp
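For step 04, the exact config file and schema depend on your agent. As a sketch, Claude-family tools use an `mcpServers` map where each entry names a local command to launch over stdio; the server name `qovery` and the argument list below are assumptions based on the CLI commands above, so check your agent's MCP documentation for the precise format:

```json
{
  "mcpServers": {
    "qovery": {
      "command": "qovery",
      "args": ["mcp", "start"]
    }
  }
}
```

Once the agent restarts and picks up the config, it discovers the server's tools automatically and can begin issuing deploy, scale, and log commands on your behalf.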
Deploy by
conversation.
Connect your AI coding agent to Qovery. Deploy, debug, and manage infrastructure without leaving your editor.