
What’s New in Qovery Q1 2025: Faster Deployments, Smarter Scaling, and More Control

Over the last three months, we’ve focused on solving three core challenges our users face: delivering faster, improving resiliency, and gaining tighter control over cloud infrastructure. Today, we’re excited to share the new features we rolled out in Q1 2025 - all built to help teams ship faster, with more confidence and lower operational overhead.
January 27, 2026
Romaric Philogène
CEO & Co-founder

Let’s dive into what’s new and what’s next for Qovery.

What's New ⚡

Karpenter for Production

Karpenter is now fully integrated and production-ready in Qovery. You can automatically scale your workloads based on real-time demand while drastically reducing compute waste 🔥.

Why it matters:

  • Up to 60% infrastructure cost savings.
  • Faster pod scheduling and optimized node utilization.
  • Zero manual tweaking - it just works.

It is ideal for teams with fluctuating workloads who want to align Kubernetes compute with real usage and avoid wasting money.

Karpenter configuration via Qovery web interface

Simplicity doesn’t mean limited: you can also configure additional Karpenter settings, such as node types, spot instances, consolidation, and limits - and, soon, even custom node pools.
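Under the hood, these options map onto Karpenter’s NodePool API. As a rough illustration only (Qovery generates and manages the equivalent configuration for you; the names and values below are examples, not what Qovery emits), a NodePool allowing spot capacity with consolidation and resource limits looks like this:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow both spot and on-demand capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        # Constrain which instance types can be provisioned
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m5.large", "m5.xlarge"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  disruption:
    # Consolidate nodes that are empty or underutilized
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
  limits:
    # Hard cap on total provisioned capacity
    cpu: "100"
    memory: 200Gi
```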

Deployment Pipeline v5

We’ve re-engineered our Deployment Pipeline from the ground up. Version 5 brings better visibility into each deployment stage.

Your developers focus on delivering features instead of wasting time debugging deployments.

Deployment History View

Quickly inspect what was deployed, when, by whom - and with which commit and image tag. The new Deployment History View makes this fully traceable.

Deployment Queuing

It’s finally here… you can now queue deployment requests within the same environment! 🎉

Remember the toast message saying an action couldn’t be performed because a deployment was ongoing? That’s a thing of the past!

You can trigger a new deployment within the same environment, even if another one is in progress. Every new request will be added to a queue and processed as soon as the ongoing deployment is completed.

Bonus: To speed up overall deployment time, queued requests are automatically merged into a single deployment whenever possible (if triggered by the same user and with the same action).

See the documentation for this feature.
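Conceptually, the merge behavior can be sketched like this (a simplified Python illustration of the behavior described above, not Qovery’s actual implementation - `DeployRequest` and `enqueue` are hypothetical names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeployRequest:
    user: str
    action: str  # e.g. "deploy" or "stop"
    commit: str

def enqueue(queue: list, req: DeployRequest) -> list:
    """Queue a deployment request, merging it with the last queued
    request when the same user triggered the same action (the newest
    request, and thus the newest commit, supersedes the older one)."""
    if queue and queue[-1].user == req.user and queue[-1].action == req.action:
        return queue[:-1] + [req]  # merge: drop the superseded request
    return queue + [req]
```

Two back-to-back deploys by the same user collapse into one queued deployment of the latest commit, while a request from a different user stays as a separate queue entry.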

Kubernetes Ephemeral Debug Pod

We’ve introduced a new CLI command that simplifies connecting to your Kubernetes cluster without requiring credentials. This feature is restricted to admins; all connections are logged in the audit logs for security and transparency.

Technically, this feature deploys a dedicated debug pod on your cluster, preloaded with valuable tools like kubectl and k9s. It’s an invaluable resource for debugging or investigating issues directly from your local machine.

What’s Coming Next 🔮

Here’s what you can expect in Q2 2025:

  • Built-in Observability and Monitoring: Track CPU, memory, network, and app logs directly inside Qovery.
  • DevOps AI Agent: Automatically analyze misconfigurations, suggest infra optimizations, and even detect cost anomalies.
  • Qovery on VMware (EKS Anywhere): For companies running on-premise or hybrid, we’re bringing Qovery to VMware with EKS-A support.
  • AKS (Azure Kubernetes Service) Support: With Karpenter, for those running in Microsoft Azure.
  • AWS STS (Security Token Service): Enabling fine-grained access management and secure cross-account workflows.

The Future of Qovery 🎉

Qovery will be an all-in-one DevOps platform for 2025

Qovery is evolving into the complete Kubernetes management platform that combines:

  • Infrastructure Management
  • CI/CD
  • Observability
  • FinOps
  • SecOps

By Q3/Q4 2025, our mission is to deliver a self-service platform that gives dev teams the speed of Heroku, the power of Kubernetes, and the financial transparency of a FinOps-native platform - all in one.

It's a Wrap

Whether you’re scaling your platform, controlling cloud costs, or accelerating releases, our latest features are designed to give you the leverage you need. As always, you can follow our full changelog at qovery.com/changelog, and if you want a live demo or to explore any of these features, just reach out.

Stay tuned for the next wave of releases - and thank you for building with Qovery.

--

Feel free to watch the full demo day 👇🏼
