Default Autoscaling Mode
All AWS EKS clusters on Qovery now use Karpenter for autoscaling, which automatically launches the right compute resources to handle your cluster's applications.
Creating an AWS EKS Cluster
Create the Cluster
1
Select AWS as Hosting Mode
Click on AWS as hosting mode, then select the Qovery Managed option. In the Create Cluster window, enter:
- Cluster name: enter a name of your choice for your cluster.
- Description: enter a description to better identify your cluster.
- Production cluster: select this option if your cluster will be used for production. Note: Karpenter is currently only available for non-production clusters.
- Region: select the geographical area in which you want your cluster to be hosted.
- Credentials: select one of the existing cloud provider credentials or create new credentials.
Click Next.
2
Set Resources
In the Set Resources window, select:
- Karpenter: toggle the switch to enable Karpenter on your AWS EKS cluster.
- Node disk size (GB): Specify the disk capacity allocated per worker node, determining the amount of data each node can store. The minimum value is 20GB.
- Instance types scopes: By editing it, you can apply different filters to the node architectures, categories, families, and sizes. On the right, you can view all the instance types that match the applied filters. This means Karpenter will be able to spawn nodes on any of the listed instance types.
- Architectures: by default, both AMD64 and ARM64 architectures are selected.
- Default build architecture: AMD64 by default. If you build your application with the Qovery CI, your application will be built using this architecture by default.
- Families: by default, all families are selected.
- Sizes: by default, all sizes are selected.
- Spot instances: to reduce your costs even further, you can also enable spot instances on your cluster. Spot instances cost up to 90% less than On-Demand prices, but keep in mind that they can be terminated by the cloud provider at any time. Check this documentation for more information. Even when this flag is enabled, statefulsets and the Nginx controller won't run on spot instances.
- Enable GPU Nodepool configuration: If you want to run GPU workloads on your cluster, you can enable this option to create a dedicated nodepool for GPU instances. You will then be able to select the GPU instance types you want to use on this nodepool. To enable spot instances, toggle the spot instance flag.
Click Next.
3
Configure Features
In the Features step, select the features you want to enable on your cluster. If you want to manage the network layer of your cluster yourself, switch VPC mode to Deploy on my existing VPC to use your own VPC instead of the one provided by Qovery.
- VPC managed by Qovery
- Use your existing VPC
Static IP
By default, when your cluster is created, its worker nodes are allocated public IP addresses, which are used for external communication. For improved security and control, the Static IP feature allows you to ensure that outbound traffic from your cluster uses specific IP addresses. Here is what will be deployed on your cluster:
- NAT Gateways
- Elastic IPs
- Private subnets
To find these IPs on AWS:
- On your AWS account, open the VPC service.
- On the left menu, click Elastic IP addresses. The Allocated IPv4 address column lists your public IPs.
If you work in a sensitive business area such as financial technology, enabling the Static IP feature can help fulfil the security requirements of some of the external services you use, making it easier for you to get whitelisted by them. This feature is activated by default. Since February 1, 2024, AWS charges for public IPv4 addresses, so disabling this feature may cost you more, depending on the number of nodes in your cluster. Check this link for more information.
Custom VPC Subnet
Virtual Private Cloud (VPC) peering allows you to set up a connection between your Qovery VPC and another VPC in your AWS account. This way, you can access resources stored in your AWS VPC directly from your Qovery applications. A VPC can only be used if it has at least one range of IP addresses, called a subnet. When you create a cluster, Qovery automatically picks a default subnet for it. However, to perform VPC peering, you may want to define which specific VPC subnet to use so that you can avoid conflicting settings. To do so, enable the Custom VPC Subnet feature on your cluster. For more information on how to set up VPC peering, see our dedicated tutorial.
4
Create and Install
In the Ready to install your cluster window, check that the services needed to install your cluster are correct. You can now press the Create and Install button. Your cluster is now displayed in your organization settings with the Installing... status (orange). Once your cluster is properly installed, its status turns green and you will be able to deploy your applications on it. You can follow the execution of the action via the cluster status and/or by accessing the Cluster Logs.

Migrating from AWS with Auto-Scaler to AWS with Karpenter
Requirements
1
Update IAM Permissions
An SQS queue will be created. Update the IAM permissions of the Qovery user: make sure to use the latest policy version here to add the SQS permissions.
2
Enable Instance Metadata Service Version 2
Your cluster must use Instance Metadata Service Version 2: make sure to set the aws.eks.ec2.metadata_imds cluster advanced setting to required if it is not already set (more details here).
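Cluster advanced settings are expressed as key/value pairs; a minimal fragment showing the expected value (the setting name comes from this page, while the JSON shape is an assumption based on how Qovery advanced settings are typically edited) would look like:

```json
{
  "aws.eks.ec2.metadata_imds": "required"
}
```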
Redeploy your cluster before enabling Karpenter to apply the advanced setting change.
3
Configure Additional Subnets (Existing VPC Only)
If you have configured an existing VPC for your cluster, you'll need to indicate some additional subnets dedicated to Fargate:
- these subnets must be private
- they must all have access to the internet through a NAT gateway
4
Update Daemonsets
If you have deployed some daemonsets, you must update their definitions so that they can run on every node of the future nodepools (stable & default). Everything is explained in our guide.
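As a rough illustration only (the exact taint keys and labels depend on the guide linked above; karpenter.sh/nodepool is the standard Karpenter node label, and the nodepool names stable and default come from this page), a DaemonSet pod spec fragment that allows pods to land on both nodepools might look like:

```yaml
spec:
  template:
    spec:
      # Tolerate any taints the nodepools may carry. Replace with the
      # specific taint keys from the Qovery guide -- this broad form is
      # a placeholder.
      tolerations:
        - operator: Exists
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: karpenter.sh/nodepool
                    operator: In
                    values: ["stable", "default"]
```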
Enable Karpenter
You can easily activate Karpenter on your existing non-production cluster by following this process:
1
Open Qovery Console
Open your Qovery Console.
2
Navigate to Cluster Page
On the left menu bar, click on the Cluster page.
3
Access Cluster Settings
To access your cluster settings, click on the wheel button.
4
Activate Karpenter
Go to the Resources section and switch on the Activate Karpenter toggle.
5
Update Your Cluster
Update your cluster by selecting the Update action from the drop-down menu.
6
Verify and Add Instance Types
Once the update is complete, your cluster will be running on Karpenter. By default, only the instance types selected when you created your AWS cluster with the auto-scaler will be configured. You can add additional instance types by editing the instance types in the resources section.
7
Redeploy Environments
Please redeploy all the environments of your cluster: this will automatically update your services' configuration to run them on the appropriate nodepool.
Managing your Cluster Settings
To manage the settings of an existing cluster:
1
Open Qovery Console
Open your Qovery Console.
2
Navigate to Cluster Page
On the left menu bar, click on the Cluster page.
3
Access Cluster Settings
To access your cluster settings, click on the wheel button.
General
The General tab allows you to define high-level information about your cluster:
| Item | Description |
|---|---|
| Cluster Name | To edit the name of your cluster. |
| Description | To enter or edit the description of your cluster. |
| Production Cluster | To edit the production flag of your cluster. |
Credentials
Here you can manage the cloud provider credentials associated with your cluster. If you need to change the credentials:
- generate a new set of credentials on your cloud provider (procedure for AWS account)
- create the new credentials in the Qovery console by opening the drop-down and selecting "New Credentials"
Resources
Qovery deploys two node pools by default:
- Stable node pool: used for single instances and internal Qovery applications. For example, any containerized database or application with the minimum number of instances set to 1 will be deployed on this nodepool. Consolidation is deactivated by default on this nodepool.
- Default node pool: Designed to handle general workloads and serves as the foundation for deploying most applications.
Settings for nodepools:
- Instance types: Define the list of instance types that can be used. (Shared for Stable and Default nodepools)
- Spot instances: Enable or disable spot instances. (Shared across the three nodepools)
- Node disk size (GB): Specify the disk capacity allocated per worker node, determining the amount of data each node can store. (Shared for Stable and Default nodepools)
- Consolidation schedule (Stable nodepool only): defines time windows during which workloads can be consolidated onto fewer nodes to optimize resource usage. A schedule is not available for the default nodepool, where consolidation can happen at any time. We recommend enabling this option; otherwise, nodes in the stable nodepool will never be consolidated, leading to unnecessary infrastructure costs.
- Node pool limits: Configure CPU and memory limits to ensure nodes stay within defined resource constraints, preventing excessive costs.
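Under the hood, these limits map onto Karpenter NodePool resources, which Qovery manages for you. As a sketch only (the values are illustrative, not defaults), the corresponding fragment of a Karpenter NodePool looks like:

```yaml
# Fragment of a Karpenter NodePool (managed by Qovery -- values illustrative).
spec:
  limits:
    cpu: "200"      # cap on total vCPUs the nodepool can provision
    memory: 400Gi   # cap on total memory the nodepool can provision
```

Once a limit is reached, Karpenter stops launching new nodes for that nodepool even if pods are pending, which is what keeps costs bounded.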
Mirroring registry
In this tab, you will see that a container registry already exists (called registry-{$UIID}).
This is your cloud provider's container registry, used by Qovery to manage the deployment of your applications by mirroring the Docker images.
The credentials configured on this registry are the ones used to create the cluster, but you can still update them if you prefer to manage them separately (a dedicated pair of credentials just to access the registry).
Check this link for more information.
Features
The Features tab in your cluster settings allows you to check whether the Static IP, Custom VPC Subnet, and Deploy on existing VPC features are enabled on your cluster. These features cannot be changed after the cluster has been created.
Network
The Network tab in your cluster settings allows you to update your Qovery VPC route table so that you can perform VPC peering. For step-by-step guidelines on how to set up VPC peering, see our dedicated tutorial.
Defining cluster node constraints to run your Services
Define if your service can run on an on-demand instance
When using spot instances in your cluster, you may want to ensure that certain critical services, such as databases or essential applications, are always deployed on on-demand instances. To specify that a service should be deployed on an on-demand instance, manually set the deployment.affinity.node.required advanced setting to:
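The value is a key/value map of node labels the service's pods must match. Assuming the standard Karpenter capacity-type node label (an assumption here; check the advanced-settings reference for the exact expected keys), it would look like:

```json
{
  "karpenter.sh/capacity-type": "on-demand"
}
```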
Define the instance type to run your service
In some cases, you may need to ensure that a specific service runs on a particular instance type to meet performance, compliance, or cost requirements. For example, to assign a service to the t3a.xlarge instance type, manually set the deployment.affinity.node.required advanced setting to:
The specified instance type must be included in the list of instance types
defined in the NodePool configuration.
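Using the well-known Kubernetes instance-type node label (the label name is an assumption; verify it against the advanced-settings reference), the value for the t3a.xlarge example would be:

```json
{
  "node.kubernetes.io/instance-type": "t3a.xlarge"
}
```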
Change the node pool of your service when using Helm
When using Helm, you can update the affinity field in your values.yaml file to target a specific node pool for your service. For example, you can switch from the default to the stable nodepool:
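A sketch of the corresponding values.yaml fragment, assuming your chart passes affinity through to the pod spec and that nodes carry the standard karpenter.sh/nodepool label (both assumptions):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: karpenter.sh/nodepool
              operator: In
              values: ["stable"]  # switch from "default" to "stable"
```

Note that required node affinity only affects where new pods are scheduled; you must redeploy the service for the change to take effect.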