GPU workloads on EKS just got way simpler with Qovery

Running GPU workloads on EKS has never been easy, until now. With Qovery’s latest update, you can enable GPU nodes, configure GPU access, and optimize costs automatically, all without writing a single line of YAML or touching Helm charts. Qovery now handles everything behind the scenes so you can focus entirely on your applications.
Alessandro Carrano
Lead Product Manager

Key Points:

  • Qovery radically simplifies GPU provisioning on EKS: it automates the entire process, which previously involved manually defining node pools, writing YAML, installing NVIDIA plugins, and modifying application manifests, reducing it to a few simple steps and eliminating significant DevOps overhead.
  • Controlled and cost-optimized GPU access: Users can easily enable GPU node pools, select instance types (mixing On-Demand and Spot instances for cost/performance), define cluster-wide limits, and specify the GPU needs per application.
  • Automatic setup and optimization: Qovery automatically handles the technical backend, including installing and configuring the NVIDIA Kubernetes Device Plugin and ensuring cost efficiency by selecting the best instance type combinations through its Karpenter implementation.

Run GPU-powered applications on an EKS cluster in minutes

Whether you’re training models, running inference pipelines, or powering compute-heavy workloads, Qovery makes GPU provisioning on an EKS cluster as simple as this:

  • Enable GPUs on your cluster:
    • Enable GPU node pools in your cluster settings.
    • Choose the instance types you need.
    • Mix On-Demand and Spot instances for the best balance of cost and performance.
    • Define a limit on the total number of GPUs your cluster can use.
  • Define GPU needs per application: specify how many GPUs your app requires. Qovery handles provisioning, scheduling, and placement for you.
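
Under the hood, that per-application setting boils down to a standard Kubernetes GPU resource request. The snippet below is only an illustrative sketch of the kind of spec Qovery generates for you; the application name and image are hypothetical:

```yaml
# Illustrative sketch of the pod spec Qovery generates when an application
# requests one GPU. The name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: inference-api              # hypothetical application name
spec:
  containers:
    - name: inference-api
      image: registry.example.com/inference-api:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1        # GPU count exposed by the NVIDIA device plugin
```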

Qovery's provisioning engine (our Kubernetes deployment platform) automatically takes care of:

  • NVIDIA plugin setup: Qovery installs and configures the NVIDIA Kubernetes Device Plugin automatically, so your applications can access GPUs right away.
  • Cost optimization: Qovery selects the most cost-effective instance type combination based on your workload’s GPU requirements, thanks to the Karpenter implementation.
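
For a sense of what this looks like under the hood, the cluster-level choices above (instance types, a Spot/On-Demand mix, and a cluster-wide GPU cap) roughly correspond to a Karpenter NodePool like the sketch below. This is an approximation for illustration only, not Qovery's actual configuration; the instance types, limits, and names are example values:

```yaml
# Rough approximation of the kind of Karpenter NodePool Qovery manages for
# GPU nodes. Instance types, limits, and names are example values only.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                              # example EC2NodeClass
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["g5.xlarge", "g5.2xlarge"]      # example GPU instance types
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]            # mix Spot and On-Demand
      taints:
        - key: nvidia.com/gpu
          effect: NoSchedule                       # keep non-GPU pods off GPU nodes
  limits:
    nvidia.com/gpu: "8"                            # cluster-wide GPU cap (example)
```

Because Karpenter picks the cheapest instances that satisfy these requirements, capping GPUs at the cluster level while leaving the instance choice open is what keeps costs in check.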

What it used to look like (before Qovery)

Before this release, setting up GPU workloads on Kubernetes meant doing everything yourself:

  1. Manually define and deploy GPU node pools
    • Create YAML or CLI definitions for GPU-capable nodes, labels, taints, and autoscaling.
    • Configure Spot instance handling and scaling behavior.
  2. Install NVIDIA components
    • Add the NVIDIA Helm charts for the device plugin and drivers.
    • Manage chart values, version compatibility, and updates across clusters.
  3. Modify application manifests (see the sketch after this list)
    • Add resources.requests/limits for nvidia.com/gpu.
    • Set node selectors, tolerations, and affinities for GPU nodes.
    • Tune and redeploy Helm charts for GPU access.
  4. Maintain and optimize over time
    • Monitor GPU utilization and keep costs under control.
    • Update plugins and drivers as Kubernetes versions evolve.
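
To make step 3 concrete, here is a rough sketch of the scheduling boilerplate each GPU workload previously needed in its own manifest. The label key, taint, names, and image are illustrative conventions, not a specific required setup:

```yaml
# Rough sketch of the per-workload boilerplate described in step 3.
# Label key, taint, names, and image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trainer                          # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trainer
  template:
    metadata:
      labels:
        app: trainer
    spec:
      nodeSelector:
        nvidia.com/gpu.present: "true"   # example label identifying GPU nodes
      tolerations:
        - key: nvidia.com/gpu            # tolerate the GPU node taint
          operator: Exists
          effect: NoSchedule
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest  # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 2          # request two GPUs
```

With Qovery, none of this lives in your repository: you set the GPU count in the application settings and the platform applies the equivalent configuration for you.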

Now, Qovery does all that for you.

Why this matters

GPU workloads are core to modern applications, but Kubernetes wasn’t built to make GPU management simple.

Qovery bridges that gap by abstracting away the complexity. In just a few clicks, your applications can access powerful GPUs, without the DevOps overhead.

Get started

You can start using GPU node provisioning today.

Check out our documentation to learn how to enable GPU support for your clusters and applications.
