
Migrating from NGINX Ingress to Envoy Gateway (Gateway API): behind the scenes

Following the end of maintenance of the Ingress NGINX project, we have been working behind the scenes to migrate our customers’ clusters from Kubernetes Ingress + NGINX Ingress Controller to Gateway API + Envoy Gateway.
February 23, 2026
Benjamin Chastanier
Software Engineer

This article covers the full picture: why we're making this move, why we chose Envoy Gateway over other implementations, how we're rolling it out across 300+ clusters in four phases, and what changes for you as a Qovery customer, including the advanced settings that behave differently between NGINX and Envoy.

This kind of migration is only successful if it is boring: predictable, measurable, and easy to roll back.

This is not a “fun refactor.” It is work we have to do because:

  • Ingress NGINX reaches end of maintenance in March 2026.
  • We want to move to the Kubernetes API that is meant to replace Ingress: Gateway API.

The main requirement on our side is simple to state and hard to execute: customers should not see downtime during the switch.

Context: why we had to move

Since Qovery’s early days, we relied on Kubernetes Ingress resources and NGINX Ingress Controller to route traffic from the public internet to services running in customer clusters.

In late 2025 and early 2026, Kubernetes announced that the Ingress NGINX project would no longer be maintained starting March 2026 (announcement and statement). No maintenance means no security fixes and no evolution.

We could have treated this as “find another ingress controller and keep going.” Instead, we decided to use this deadline to finish a shift we already wanted to make: adopting the Kubernetes Gateway API (more info).

Why Gateway API instead of sticking with Ingress

Ingress has served the Kubernetes ecosystem well, but it has limits.

Gateway API is the community’s attempt to fix those limits without relying on controller-specific annotations:

  • The Ingress API is effectively “feature complete” and will not be extended further.
  • Ingress implementations depend heavily on annotations.
  • Annotation-based configuration is hard to standardize across providers and controllers.
  • The API surface is limited for modern traffic management needs.

If you want a deeper overview of the rationale, the Gateway API maintainers have an excellent guide: Migrating from Ingress to Gateway API.
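To make the difference concrete, here is a minimal sketch of how an annotation-driven NGINX Ingress might translate to a Gateway API HTTPRoute. Resource names, hostnames, and the referenced Gateway are hypothetical, not Qovery's actual configuration:

```yaml
# Before: Ingress, with controller-specific annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                    # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
---
# After: HTTPRoute, attached to a shared Gateway; routing is
# expressed in the spec itself, with no controller annotations
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: shared-gateway        # hypothetical Gateway managed by the platform
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-app
          port: 80
```

Note the role separation: the Gateway (listeners, TLS, load balancer) is owned by the platform, while each HTTPRoute is owned by the application.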

Why we picked Envoy Gateway

We evaluated several Gateway API implementations. Envoy Gateway was the best fit for our constraints and what we want to support long term.

General reasons

  • Gateway API-first: built to support Kubernetes Gateway API (GA), with more expressive routing and cleaner separation of roles than Ingress.
  • Open source: part of the broader Envoy ecosystem.
  • CNCF graduated: strong governance and long-term viability.
  • Actively maintained.
  • Performance: strong throughput and latency characteristics, especially under high concurrency.
  • Cloud-native design: clear separation of control plane and data plane, lightweight, and well suited to multi-tenant environments.

Qovery-specific reasons

  • Provider-agnostic: consistent configuration across providers, with fewer implementation-specific hacks.
  • Extensibility and observability: Envoy is strong on telemetry and deep customization.
  • Standardization and portability: the Gateway API reduces annotation sprawl and improves manifest portability.
  • Unlocks traffic management features (over time), such as:
    • Canary and blue-green rollouts.
    • Weighted traffic shifting.
    • Traffic mirroring.
    • Traffic replay.

💡 If you run on Qovery, you will see these phases roll out progressively. If you do not, the rollout approach may still be useful if you need to swap your ingress layer without taking downtime.

Qovery’s migration goal (and constraints)

The goal is concrete: migrate all customer clusters (300+) from Ingress + NGINX to Gateway API + Envoy by the end of March 2026.

The constraints matter more than the architecture diagram:

  • No downtime.
  • No surprises. Customers should be able to validate behavior before the switch.
  • Backwards compatibility during the transition. We cannot flip everything in one shot.
  • Operational clarity. We need to see what is happening (logs, metrics, and failure modes) while two stacks coexist.

To get there, our work fell into three buckets:

  • Selecting the best Gateway API-compatible replacement for NGINX Ingress Controller.
  • Making the Qovery stack compatible with Envoy + Gateway API while keeping backwards compatibility during rollout.
  • Designing a step-by-step migration path customers can follow safely.

Target architecture

Here is the end state we are aiming for:

[Diagram: target architecture]

Qovery’s rollout strategy (4 phases)

Changing the edge routing layer changes how every application is reached. We are doing it progressively so problems show up early, when rollback is still easy.

The rollout is split into four phases:

Phase 0: NGINX only (current default)

This is the starting point: NGINX Ingress Controller is installed and routing is configured through Ingress resources.

Phase 1: Deploy Gateway API + Envoy next to NGINX (shadow mode)

In this phase, we deploy the new stack alongside NGINX.

It is installed and configured, but it does not serve production traffic by default.

Everything new appears in purple in the diagram below.

What this introduces

  • Envoy Gateway and Gateway API infrastructure.
  • Services are also registered into the Gateway API stack.

What this changes for customers

  • Services become accessible through a temporary CNAME for safe testing.
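Conceptually, the shadow stack amounts to a second, independent entry point. A hypothetical sketch of the Envoy-backed Gateway deployed alongside NGINX (the GatewayClass name, listener layout, and certificate reference are illustrative, not Qovery's actual configuration):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-shadow              # hypothetical
spec:
  # GatewayClass provided by the Envoy Gateway installation (name illustrative)
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-cert   # hypothetical TLS secret
```

Because this Gateway provisions its own load balancer, it can be exercised through the temporary CNAME without touching the traffic flowing through NGINX.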

Phase 2: Gateway API + Envoy becomes the default routing path

This is the real switchover. Envoy Gateway becomes the default entry point and requests are served by Envoy.

The main CNAME is updated to point to the Load Balancer managed by the Gateway API stack.

We intentionally keep NGINX around during this phase because DNS caching means the transition can take 15 to 20 minutes (and sometimes longer depending on client configuration).

What this introduces

  • Main CNAME updated to point to the Envoy + Gateway API Load Balancer.

What this changes for customers

  • All domains (Qovery-provided domains and custom domains) are served through Gateway API + Envoy.

Phase 3: Remove the NGINX stack

Once we have confidence that traffic is fully served by Envoy, we remove the NGINX stack.

What this introduces

  • NGINX components are removed from the cluster.

What this changes for customers

  • No NGINX Load Balancer.
  • No NGINX-specific configuration paths.

Advanced settings recap (NGINX → Envoy)

For most workloads, the migration is straightforward. But there are a few NGINX configuration knobs (exposed as Qovery advanced settings) where the model changes.

Sometimes there is a direct equivalent.

Sometimes there is not, because Envoy and NGINX solve the problem differently.

The tables in this article highlight the main differences, so you can review them before and during the rollout.
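As one concrete example of a setting whose model changes: timeouts that NGINX configures via annotations (such as `nginx.ingress.kubernetes.io/proxy-read-timeout`) map onto the Gateway API's per-rule `timeouts` field instead. A hedged sketch, with hypothetical names and values (exact support can vary by implementation and Gateway API version):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app                    # hypothetical
spec:
  parentRefs:
    - name: shared-gateway        # hypothetical
  rules:
    - backendRefs:
        - name: my-app
          port: 80
      timeouts:
        request: 60s              # roughly analogous to proxy-read-timeout: "60"
```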

Our customers' migration plan

Here you can find the migration plan that we have prepared for our customers.

The migration involves the following steps:

  • Test how Gateway API behaves with your services using the Phase 1 temporary CNAME.
  • Review service and cluster advanced settings in light of the differences above.
  • Retrieve Envoy logs when debugging routing behavior.
  • Monitor the dual-stack period (NGINX + Envoy) to validate the switch.

Start on Qovery today: join our 14-day free trial.
