
Stop tool sprawl - Welcome to Terraform/OpenTofu support

Provisioning cloud resources shouldn’t require a second stack of tools. With Qovery’s new Terraform and OpenTofu support, you can now define and deploy your infrastructure right alongside your applications. Declaratively, securely, and in one place. No external runners. No glue code. No tool sprawl.
December 22, 2025
Alessandro Carrano
Head of Product

Too Many Tools, Too Much Overhead

Provisioning cloud infrastructure and deploying applications have traditionally lived in separate silos. Teams use tools like Atlantis, Spacelift, or custom runners to manage Terraform or OpenTofu. Then, they turn to ArgoCD, Flux, or Qovery to deploy their applications.

The result?

Fragmented workflows, inconsistent deployment timing, fragile CI scripts, and a constant back-and-forth between tools just to get a working environment up and running.

If your infra isn’t ready, your app deployment fails. If your app needs outputs from Terraform, someone has to wire them together manually. It works, but it’s painful and hard to scale.

The Qovery Platform: One Environment, One Control Plane

Qovery was built to simplify the application lifecycle by unifying it inside your own Kubernetes cluster, on your infrastructure, with your security, under your control.

Now, with native Terraform and OpenTofu support, Qovery extends that same control to infrastructure provisioning. You can deploy everything from a single environment: no CI glue, no handoffs, no tool sprawl.

This feature isn’t a side add-on. It’s a natural extension of how Qovery environments work.

You can specify deployment order between infrastructure resources and applications, pass outputs as environment variables for workloads, and manage the full stack lifecycle directly in Qovery, all running securely inside your Kubernetes cluster.
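As a sketch of what this looks like on the manifest side, here is a minimal Terraform/OpenTofu example that provisions a database and exposes its endpoint as an output — the kind of value Qovery can then inject into downstream services as an environment variable. The resource and variable names are illustrative, not taken from Qovery's documentation:

```hcl
# Illustrative input: sensitive values stay out of the manifest.
variable "db_password" {
  type      = string
  sensitive = true
}

# Provision a Postgres instance (hypothetical sizing and naming).
resource "aws_db_instance" "app_db" {
  identifier          = "app-db"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = var.db_password
  skip_final_snapshot = true
}

# Output consumed by the backend service once the database is up.
output "database_endpoint" {
  value = aws_db_instance.app_db.endpoint
}
```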

Outcomes for Your Team

With Terraform & OpenTofu support, you’ll get:

  • Fewer scripts: no more custom CI jobs to glue Terraform to app deployments
  • Consistent deployments: define the full stack once, deploy it the same way every time
  • Less waiting on DevOps: developers can self-serve infra with guardrails
  • No tool sprawl: one platform to manage infra and apps together

A Realistic Example: From Three Tools to One Platform

One of our users was running Terraform through Atlantis, applications through ArgoCD, and using CI scripts to pass values between them. The process worked but was fragile and hard to scale. Any change required coordination across repos, tooling, and teams.

They moved to Qovery’s native Terraform support, defined their infrastructure and applications in the same environment, set the proper deployment order (RDS → seed job → backend), and removed dozens of lines of CI logic. Now, it’s all handled by Qovery: in one flow, with full visibility.

Read more: Cut Tool Sprawl: Automate Your Tech Stack with a Unified Platform

Deploy Your Manifest in 3 Simple Steps

  1. Add a new service of type “Terraform” inside your existing Qovery environment
  2. Connect the Git repository containing your Terraform or OpenTofu manifest, review the inputs Qovery automatically fetches from it, and define the state location and deployment order
  3. Execute plan or apply: Qovery will manage the lifecycle, handle remote state, and inject outputs as environment variables for your other services to consume
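To make step 2 concrete, here is a hedged sketch of a minimal manifest: the declared variables are what Qovery would detect as inputs, and the output is what it would inject into dependent services after apply. All names below are illustrative assumptions:

```hcl
# Illustrative inputs: declared variables are detected from the manifest.
variable "environment" {
  type        = string
  description = "Target environment name, e.g. staging or production"
}

variable "db_instance_class" {
  type    = string
  default = "db.t3.micro"
}

# Illustrative output: exposed to other services in the environment
# as an environment variable once apply completes.
output "resource_prefix" {
  value = "app-${var.environment}"
}
```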

Want to see it in action? Check the demo below:

Try It Today

Ready to simplify your infra and app deployments?

Try it out today by adding a new Terraform resource directly in an existing environment.

Need help migrating from Atlantis or custom scripts? We’re here to help.
