
Discover 7 New Major Features on Qovery

I'm thrilled to unveil a suite of groundbreaking features that we've introduced over the past four months at Qovery. Our commitment to enhancing your development and deployment experience continues to be our driving force. Recently, we shared these updates during our exhilarating public demo day, which you can watch here. Let's dive into the features that are set to redefine your interaction with Qovery.
September 26, 2025
Romaric Philogène
CEO & Co-founder

The 7 New Major Features

1. GCP GKE Integration

Our journey into the cloud has taken a significant leap with the integration of Google Cloud Platform's Google Kubernetes Engine (GKE). This feature allows you to seamlessly deploy and manage your applications on GKE, embracing the power and flexibility of Google Cloud.

Qovery now supports AWS, Scaleway, and Google Cloud Platform (GCP).

See documentation

2. Bring Your Own Kubernetes (BYOK)

With BYOK, we're tearing down the walls of limitation. You can now bring any Kubernetes cluster to Qovery, whether it's on-premise, in the cloud, or even on your local machine.

This flexibility ensures that you can enjoy the Qovery experience on your terms.

See documentation

3. Helm Deployment Support

The complexity of deploying Helm charts is now a thing of the past.

Our native support for Helm deployments means you can easily deploy and manage your Helm charts directly through Qovery, streamlining your deployment processes.

See documentation
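To give a concrete sense of what a chart deployment involves, here is a minimal Helm values override of the kind you would typically supply alongside a chart. Everything below (the registry, keys, and values) is an illustrative placeholder, not a Qovery-specific format — see the documentation linked above for the exact setup:

```yaml
# Hypothetical values.yaml override for a Helm chart deployment.
# All names and values here are illustrative placeholders.
replicaCount: 2

image:
  repository: registry.example.com/my-app  # hypothetical registry
  tag: "1.4.2"

service:
  type: ClusterIP
  port: 8080

resources:
  requests:
    cpu: 250m
    memory: 256Mi
```

With native Helm support, Qovery applies overrides like this for you as part of the deployment pipeline, so you keep the chart's conventions without hand-running `helm` against the cluster.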

4. Port Forwarding

Security is paramount, and with our new Port Forwarding feature, you can securely connect to any service within your Qovery environment without exposing it to the public internet.

This keeps your internal services private while still reachable by the people and tools that need them.

See announcement
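If you have used kubectl's `port-forward`, the model is similar: a local port is tunneled to a service inside the cluster, with no public ingress created. A sketch of the general idea using plain kubectl (the service and port names below are made up; the Qovery CLI exposes its own equivalent, described in the announcement above):

```
# Tunnel local port 5432 to the in-cluster "orders-db" service (hypothetical name).
# Nothing is exposed to the public internet; traffic rides the existing
# authenticated connection to the cluster.
kubectl port-forward service/orders-db 5432:5432

# In another terminal, connect as if the service were running locally:
psql "host=127.0.0.1 port=5432 user=app dbname=orders"
```

The advantage of having this built into Qovery is that you get the same workflow without handing out raw cluster credentials to everyone who needs to debug a service.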

5. Use Existing VPC

For those who require granular control over their network configurations, you can now deploy Qovery within your existing VPC.

This capability ensures that you can adhere to your organization's networking policies while leveraging Qovery's powerful platform.

6. Bulk Actions

With Bulk Actions, you can perform actions on multiple services simultaneously, reducing the time and effort required to manage your applications.

See documentation

7. Google and Microsoft Authentication

In our quest to simplify access to Qovery, we've introduced authentication options for both Google and Microsoft accounts.


This update means faster, more secure access to Qovery using the credentials you already trust.

What's Next?

Check out our public roadmap 😎

Conclusion

These features represent our ongoing commitment to providing you with an unmatched developer experience. They're designed to make your life easier, allowing you to focus on what you do best: building amazing applications.

We're eager to hear your thoughts on these new features. Your feedback is invaluable as it helps us tailor Qovery to better meet your needs. Try them out, push their limits, and let us know your experiences.

To explore these features, sign up or log in to your Qovery account today.

Happy coding!
