AWS EKS Auto Mode with Qovery - Valuable Or Not?

At Qovery, we are closely following the development of EKS Auto Mode, a new feature from AWS designed to simplify Kubernetes management by automating various foundational components. While we recognize the effort AWS has put into this, our initial evaluation shows that EKS Auto Mode is still in its early stages and does not yet offer sufficient value to be a strong consideration for our users.
September 26, 2025
Pierre Mavro
CTO & Co-founder

Why EKS Auto Mode Doesn’t Yet Fit for Qovery Users

Qovery already provides a comprehensive solution that covers (and often exceeds) the benefits offered by EKS Auto Mode. Features like compute autoscaling, GPU support, and load balancing are already handled seamlessly by our platform. For instance, Qovery has supported compute autoscaling and GPU workloads through Karpenter for over a year, and it’s worth noting that EKS Auto Mode itself relies on Karpenter for these capabilities.
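For readers unfamiliar with Karpenter, this kind of autoscaling is declared through NodePool resources that tell Karpenter which instance shapes it may provision. The sketch below is illustrative only (the pool name, instance categories, and limits are hypothetical, not Qovery's actual configuration); it shows how a GPU-capable pool is typically expressed in Karpenter v1:

```yaml
# Hypothetical Karpenter NodePool allowing GPU instances (illustrative values)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu-workloads            # hypothetical pool name
spec:
  template:
    spec:
      requirements:
        # Restrict provisioning to GPU instance families (e.g. g5, g6)
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["g"]
        # Allow both on-demand and spot capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default            # assumes a default EC2NodeClass exists
  limits:
    cpu: "256"                   # cap total provisioned capacity
    nvidia.com/gpu: "8"
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Karpenter then provisions and deprovisions matching EC2 instances on demand, which is the same mechanism EKS Auto Mode builds on internally.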

Our existing Qovery Engine automates the setup, scaling, upgrades, and management of EKS clusters for our users, ensuring they experience minimal operational overhead. As a result, the introduction of EKS Auto Mode brings only a minor direct impact to our users, as they already benefit from equivalent or superior functionality provided by Qovery.

Potential Value for Qovery’s Operations

While EKS Auto Mode does not significantly enhance the user experience for our customers at this time, it could help reduce the internal effort required to maintain EKS clusters. Even though this process is already fully automated by Qovery, having AWS take responsibility for more aspects of cluster management is a promising direction. This could allow us to focus on delivering even more advanced features and optimizations for our users.
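To give a sense of what AWS now takes over, the sketch below shows the general shape of creating an Auto Mode cluster with the AWS CLI. The ARNs and subnet IDs are placeholders, and option names should be verified against the current `aws eks create-cluster` reference; the point is simply that compute, block storage, and load balancing are all delegated to AWS in one call:

```shell
# Sketch: creating an EKS cluster with Auto Mode capabilities enabled.
# All ARNs and subnet IDs are placeholders.
aws eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb \
  --compute-config '{"enabled": true, "nodePools": ["general-purpose", "system"], "nodeRoleArn": "arn:aws:iam::111122223333:role/eks-auto-node-role"}' \
  --kubernetes-network-config '{"elasticLoadBalancing": {"enabled": true}}' \
  --storage-config '{"blockStorage": {"enabled": true}}'
```

With Auto Mode, these three capabilities are enabled together, and AWS manages the underlying nodes, storage drivers, and load balancer controllers on the operator's behalf.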

From a user perspective, there is no difference between EKS and EKS Auto Mode with Qovery

The Road Ahead for EKS Auto Mode and Qovery

Our philosophy is to support all official Kubernetes distributions from the major cloud providers: AWS, GCP, and Azure. We are glad to see AWS working to close the gap with competing managed offerings like GCP's GKE Autopilot, but we believe EKS Auto Mode needs to mature further before it becomes a viable option for Qovery users. We plan to officially support EKS Auto Mode in the future, once it demonstrates clear and tangible value for our customers.

Final Thoughts

We commend AWS for introducing EKS Auto Mode, which fills a notable gap compared to offerings from GCP (Qovery has supported GKE Autopilot for a year now, and it's amazing). It’s a step in the right direction, and we are eager to see how this feature evolves over time. At Qovery, our commitment remains unchanged: to provide our users with the best tools and experiences, regardless of their Kubernetes infrastructure. EKS Auto Mode holds promise, but for now, the Qovery Engine continues to deliver unmatched value and operational simplicity for classic EKS management.

We look forward to keeping you updated as we continue to evaluate EKS Auto Mode and explore how it can further enhance our platform.
