Deploying AI Apps with GPUs on AWS EKS and Karpenter

As AI and machine learning workloads grow in complexity and size, efficient and scalable infrastructure matters more than ever. In this tutorial, I will show you how to deploy AI applications on Amazon Elastic Kubernetes Service (EKS) with Karpenter from scratch, leveraging GPU resources for high-performance computing. We'll use Qovery, an Internal Developer Platform that simplifies the deployment and management of applications, so developers can focus on building their applications rather than managing infrastructure.
Romaric Philogène
CEO & Co-founder

Why Use AWS EKS with Karpenter

AWS EKS provides a managed Kubernetes service that simplifies running Kubernetes without needing to install, operate, and maintain your own cluster control plane. Combined with Karpenter, an open-source, high-performance Kubernetes cluster autoscaler, you get a flexible and cost-effective solution that can efficiently manage the provisioning and scaling of nodes based on the application's requirements.

Karpenter is particularly good at handling variable workloads: it provisions the right resources at the right time, which is ideal for AI applications with sporadic or compute-intensive tasks that require GPU capacity. (I wrote a separate article on Karpenter if you want to dig deeper.)
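
To give an idea of what Karpenter does under the hood, here is a minimal sketch of a GPU-oriented NodePool. With Qovery, these resources are generated for you when Karpenter is enabled, so treat the pool name, instance families, and limits below as illustrative assumptions rather than the exact configuration Qovery applies.

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                    # hypothetical EC2NodeClass name
      requirements:
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["g5", "p4d"]          # GPU instance families; adjust to your needs
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      taints:
        - key: nvidia.com/gpu
          effect: NoSchedule             # keep non-GPU pods off these expensive nodes
  limits:
    nvidia.com/gpu: 8                    # cap the total GPUs this pool can provision

When a pending pod requests a GPU and targets this pool, Karpenter launches a matching instance; when the pod goes away, the node can be deprovisioned, so you only pay for GPU capacity while you actually use it.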

Install AWS EKS and Karpenter with Qovery

To begin, you'll need to set up AWS EKS and Karpenter. Qovery integrates seamlessly into your AWS environment, allowing you to set up EKS with Karpenter with just a few clicks:

  1. Create a Qovery account: connect to the Qovery web console.
  2. Create an AWS EKS cluster: add your AWS EKS cluster, choose the region, and configure the cluster specifications.
  3. Enable Karpenter: with the cluster ready, install Karpenter directly from the cluster advanced settings. Qovery automates the integration, ensuring Karpenter aligns with your EKS settings for optimal performance. You can verify the result with kubectl, as shown below.
Enable Karpenter for AWS EKS Cluster managed by Qovery
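
Once the cluster is ready, you can sanity-check the Karpenter installation from your terminal (the namespace below is the usual default and may differ on a Qovery-managed cluster):

kubectl get pods -n karpenter   # the Karpenter controller should be Running
kubectl get nodepools           # the node pools Karpenter can provision from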

Install NVIDIA device plugin on AWS EKS

The NVIDIA device plugin for Kubernetes is an implementation of the Kubernetes device plugin framework that advertises GPUs as available resources to the kubelet.

This plugin is required for Kubernetes to track and allocate GPUs to pods. To install it, we will use the official NVIDIA Helm chart.

Helm Repository: https://nvidia.github.io/k8s-device-plugin
Helm Chart: nvidia-device-plugin
Helm Version: 0.15.0
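
If you prefer the command line, the equivalent install with the Helm CLI looks roughly like this (the release name nvdp and the namespace are the conventional choices from the chart's documentation, not requirements):

helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --version 0.15.0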

With Qovery, navigate to Organization Settings > Helm Repositories and click "Add repository".

Add your NVIDIA Helm Repository 1/2

Then register the NVIDIA Helm repository: https://nvidia.github.io/k8s-device-plugin.

Add your NVIDIA Helm Repository 2/2

Then, I recommend creating a "Tooling" project with an "NVIDIA" environment. ⚠️ Make sure to select your EKS cluster with Karpenter enabled.

Create your NVIDIA environment on your AWS EKS with Karpenter cluster

Then create a Helm service called "nvidia device plugin", using the repository, chart, and version listed above.

Now you can deploy the "nvidia device plugin" service to install the plugin on your EKS cluster.
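
Once the plugin is running, GPU nodes advertise nvidia.com/gpu as an allocatable resource. Keep in mind that with Karpenter there may be no GPU node at all until a workload actually requests one; once one exists, you can check it with something like:

kubectl get nodes -o "custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"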

Deploy an App Using a GPU

Deploying an AI application that uses a GPU can be streamlined using Qovery's Helm chart capabilities:

  1. Prepare your application with a Dockerfile and Helm chart: Make sure your application is containerized and ready for deployment.
  2. Push your code to a Git repository connected to Qovery.
  3. Use Qovery to deploy your application: through the Qovery dashboard, set up your application deployment using the Helm chart, which should pin the workload to GPU-capable nodes via a nodeSelector, for example:
nodeSelector:
  karpenter.sh/nodepool: gpu
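
Beyond the nodeSelector, the container itself must request the GPU resource advertised by the device plugin, and it needs a toleration if your GPU node pool taints its nodes. Here is a minimal Deployment sketch, where the names and image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ai-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-ai-app
  template:
    metadata:
      labels:
        app: my-ai-app
    spec:
      nodeSelector:
        karpenter.sh/nodepool: gpu
      tolerations:
        - key: nvidia.com/gpu            # only needed if the GPU nodes are tainted
          operator: Exists
          effect: NoSchedule
      containers:
        - name: app
          image: my-registry/my-ai-app:latest   # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1          # request one GPU from the device plugin

With this in place, a pending replica triggers Karpenter to provision a GPU node, and the device plugin makes the GPU visible to the container.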

Bonus: Using Spot Instances

To further optimize costs, use AWS Spot Instances for your GPU workloads. With Qovery, you can enable Spot Instances in the cluster's advanced settings:

  1. Navigate to the cluster advanced settings in Qovery.
  2. Set "aws.karpenter.enable_spot" to "true". Qovery handles the integration seamlessly, providing cost savings while Karpenter provisions replacement capacity when Spot nodes are reclaimed (see the sketch below for what this corresponds to on the Karpenter side).
Enable spot instances for AWS EKS with Karpenter
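
Under the hood, allowing Spot boils down to the node pool accepting the "spot" capacity type in its requirements. A minimal sketch of that requirement on a Karpenter NodePool (illustrative, not necessarily the exact configuration Qovery applies):

requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ["spot", "on-demand"]   # Karpenter prefers Spot and falls back to on-demand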

Conclusion

By combining AWS EKS with Karpenter and using Qovery for deployment automation, you can streamline the deployment and management of AI applications that require GPU resources. This setup improves performance and optimizes costs, making it an excellent choice for developers who want to deploy AI applications efficiently at scale.

Begin deploying your AI apps today with Qovery and unlock the full potential of cloud-native technologies.
