Overview

Qovery-managed EKS provides production-ready Kubernetes clusters on AWS with zero configuration. Qovery handles everything: cluster creation, networking, scaling, monitoring, and ongoing maintenance. Perfect for teams who want AWS but don’t want to manage Kubernetes infrastructure.

Karpenter-Powered Auto-Scaling

All Qovery-managed EKS clusters use Karpenter for intelligent node provisioning:
  • 60-90% cost savings through spot instances and consolidation
  • Fast scaling in seconds (not minutes)
  • Smart instance selection from your chosen instance types
  • Automatic workload optimization to minimize costs
You select multiple instance types during setup (e.g., t3.medium, t3.large, m5.xlarge). Karpenter then automatically picks the best option based on your workload requirements, spot availability, and cost.
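
Under the hood, the instance types you select become provisioning constraints for Karpenter. The sketch below shows roughly how such constraints look on a Karpenter NodePool (v1 API); Qovery generates and manages this resource for you, so the manifest is illustrative only and simply reuses the example types above.

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["t3.medium", "t3.large", "m5.xlarge"]  # your selected instance types
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]  # allow spot, with on-demand as fallback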

Create Your First Cluster

Step 1: Open Qovery Console

  1. Go to Organization Settings → Clusters
  2. Click Create Cluster
  3. Select AWS

Step 2: Configure Cluster

  • Name: e.g., production-eks
  • Region: Choose closest to your users (e.g., us-east-1)

Step 3: Connect AWS Account

Choose how to connect your AWS account. For detailed instructions with policy information, see the AWS Installation Guide.

Step 4: Select Instance Types

Select ALL instance types you want Karpenter to choose from. Recommended selections:
  • t3.medium (2 vCPU, 4GB)
  • t3.large (2 vCPU, 8GB)
  • t3.xlarge (4 vCPU, 16GB)
  • t3.2xlarge (8 vCPU, 32GB)
  • m5.large (2 vCPU, 8GB)
  • m5.xlarge (4 vCPU, 16GB)
  • m6i.large (2 vCPU, 8GB)
  • m6i.xlarge (4 vCPU, 16GB)
Karpenter will automatically select the best instance type from your list based on:
  • Current workload requirements
  • Spot instance availability
  • Cost optimization
More instance types = better optimization and availability!
Enable spot instances for 60-90% cost savings. Karpenter automatically falls back to on-demand if spot is unavailable.

Step 5: Create

Click Create and Deploy - your cluster will be ready in 20-30 minutes!

Need detailed instructions?

See the complete AWS installation guide with screenshots and troubleshooting

What Qovery Creates

  • EKS Cluster - Latest stable Kubernetes
  • VPC & Networking - Public/private subnets across 3 AZs
  • NAT Gateways - Secure internet access
  • Security Groups & IAM Roles - Pre-configured best practices
  • Karpenter - Intelligent auto-scaling (save up to 60% on costs)
  • AWS Load Balancer Controller - Automatic ingress management
  • EBS CSI Driver - Persistent volume support
  • Metrics Server - Resource monitoring
  • Qovery Agent - Observability and management

Karpenter Auto-Scaling

Qovery uses Karpenter to automatically provision optimal EC2 instances:
  • Scales nodes within seconds based on workload demands
  • Consolidates workloads to reduce costs
  • Handles spot instance interruptions gracefully
  • Supports a wide range of instance types (t3, m5, m6i, c5, r5, GPU)
  • Mix of on-demand and spot instances for reliability
Configure Karpenter node pools and instance types in Cluster Settings after creation. Learn more in Cluster Configuration.
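
Consolidation behavior comes from the NodePool's disruption settings. Extending the illustrative NodePool sketch above, the relevant fields in the Karpenter v1 API look like this (Qovery configures these on managed clusters, so the values below are only an example):

spec:
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized  # replace or remove nodes that are empty or underutilized
    consolidateAfter: 1m                           # wait this long before consolidating

Individual pods can opt out of consolidation with the karpenter.sh/do-not-disrupt: "true" annotation.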

Configuration Options

Cluster Settings

Instance Types:
  • General Purpose: t3, m5, m6i, m6g (Graviton)
  • Compute Optimized: c5, c6i, c6g (Graviton)
  • Memory Optimized: r5, r6i, r6g (Graviton)
  • GPU Instances: g4dn, g5, p3, p4 (for AI/ML workloads)
Auto-Scaling:
  • Min nodes: 1
  • Max nodes: 100
  • Karpenter automatically provisions optimal instances
Spot Instances:
  • Enable Spot instances for cost savings (60-90% off)
  • Qovery handles interruptions gracefully
  • Mix of Spot and On-Demand for reliability
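
If a specific workload cannot tolerate spot interruptions, it can be kept on on-demand capacity via the karpenter.sh/capacity-type label that Karpenter puts on every node. A minimal sketch (the pod name and image are hypothetical, and on a Qovery-managed cluster this kind of constraint is normally applied through the application's deployment settings rather than a raw manifest):

apiVersion: v1
kind: Pod
metadata:
  name: billing-worker  # hypothetical name
spec:
  nodeSelector:
    karpenter.sh/capacity-type: on-demand  # keep this pod off spot nodes
  containers:
    - name: worker
      image: registry.example.com/billing-worker:latest  # placeholder image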

GPU Support

Karpenter clusters support GPU-enabled instances for AI/ML workloads, scientific computing, and graphics-intensive applications. Available GPU Instance Types:
Instance Type  | GPU             | vCPU | Memory  | Use Case
g4dn.xlarge    | 1x NVIDIA T4    | 4    | 16 GB   | ML inference, video processing
g4dn.2xlarge   | 1x NVIDIA T4    | 8    | 32 GB   | ML training, rendering
g5.xlarge      | 1x NVIDIA A10G  | 4    | 16 GB   | High-performance ML inference
g5.2xlarge     | 1x NVIDIA A10G  | 8    | 32 GB   | ML training, graphics workloads
p3.2xlarge     | 1x NVIDIA V100  | 8    | 61 GB   | Deep learning training
p4d.24xlarge   | 8x NVIDIA A100  | 96   | 1152 GB | Large-scale ML training
Setup Steps:

Step 1: Include GPU Instance Types

When creating your cluster, select GPU instance types (g4dn, g5, p3, p4) in addition to your standard instances.

Step 2: Configure Application

In your application settings, configure the GPU resource requirement:
resources:
  gpu: 1  # Number of GPUs needed
See Application GPU Configuration for details.

Step 3: Karpenter Auto-Provisioning

Karpenter will automatically provision GPU instances when your application requests GPU resources, and deprovision them when not needed.
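
At the Kubernetes level, the signal Karpenter reacts to is a GPU resource limit on the container. A minimal sketch, assuming the NVIDIA device plugin is present on the GPU nodes (the pod name and image are hypothetical; on Qovery the manifest is generated from the application settings above):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference  # hypothetical name
spec:
  containers:
    - name: inference
      image: registry.example.com/gpu-inference:latest  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1  # requests one GPU; Karpenter provisions a GPU node (e.g. g4dn/g5) if none is free
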
GPU Instance Costs: GPU instances are significantly more expensive than standard instances. Ensure your workload actually utilizes GPU acceleration before enabling them. Example Pricing (us-east-1):
  • g4dn.xlarge: ~$0.526/hour
  • g5.xlarge: ~$1.006/hour
  • p3.2xlarge: ~$3.06/hour
Use Spot instances for GPU workloads to save 60-90% on costs. Karpenter handles spot interruptions gracefully, making them suitable for batch processing and training workloads.
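
As a rough worked example, applying that 60-90% range to the on-demand prices above puts a g5.xlarge spot node somewhere around $0.10-$0.40/hour; actual spot prices vary by region, Availability Zone, and time.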

Network Settings

VPC Configuration:
  • CIDR: /16 (65,536 IPs)
  • 3 public subnets (load balancers)
  • 3 private subnets (pods)
  • NAT Gateways per AZ
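
As a hypothetical illustration of how a /16 CIDR can cover that layout (the actual CIDRs are chosen when the VPC is created and may differ):

vpc_cidr: 10.0.0.0/16        # 65,536 IPs
public_subnets:              # one per AZ, for load balancers
  - 10.0.0.0/20
  - 10.0.16.0/20
  - 10.0.32.0/20
private_subnets:             # one per AZ, for pods and nodes
  - 10.0.48.0/20
  - 10.0.64.0/20
  - 10.0.80.0/20
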
Access:
  • Public endpoint (default)
  • Private endpoint (enterprise feature)
  • VPN access for private endpoints
For the detailed AWS installation guide, see AWS Installation.