Overview

Bring Your Own Kubernetes (BYOK) allows you to connect your existing EKS cluster to Qovery. You maintain full control over your cluster while Qovery manages application deployments.

Prerequisites

Existing EKS cluster (Kubernetes 1.24+)
kubectl access with cluster-admin permissions
AWS credentials for Qovery to access your cluster
EBS CSI driver installed
AWS Load Balancer Controller or NGINX Ingress Controller installed
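
A quick way to confirm the version and access prerequisites before you start:
# Server version should report 1.24 or newer
kubectl version

# cluster-admin can do everything; this should print "yes"
kubectl auth can-i '*' '*' --all-namespaces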

Setup

Step 1: Get Qovery Agent Manifests

In Qovery Console:
  1. Settings → Clusters → Add Cluster
  2. Select “Bring Your Own Kubernetes”
  3. Choose “AWS EKS”
  4. Download Helm values or kubectl manifests

Step 2: Install Qovery Agent

Using Helm (recommended):
helm repo add qovery https://helm.qovery.com
helm repo update

helm install qovery-agent qovery/qovery-agent \
  --namespace qovery \
  --create-namespace \
  --values qovery-values.yaml
Or using kubectl:
kubectl apply -f qovery-agent.yaml

Step 3: Verify Connection

Check agent status:
kubectl get pods -n qovery
# qovery-agent-* should be Running
In the Qovery Console, the cluster should show as “Connected”.

Step 4: Deploy Applications

You can now start deploying applications to your BYOK cluster.

What Qovery Installs

Qovery Agent:
  • Manages application deployments
  • Communicates with Qovery Control Plane
  • Handles secrets and configuration
Optional Components (if not present):
  • Nginx Ingress Controller
  • Cert-Manager (for SSL certificates)
  • External-DNS (for domain management)
  • Metrics Server
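
To check which of these optional components already exist on your cluster before installing the agent, something like the following works (the grep pattern only covers common deployment names; adjust for your setup):
# List deployments matching the optional components Qovery may install
kubectl get deployments -A | grep -E 'ingress-nginx|cert-manager|external-dns|metrics-server'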

Requirements

Kubernetes Version

  • Minimum: 1.24
  • Recommended: 1.27+
  • Maximum: 1.29

Required Addons

  • Storage (EBS CSI driver)
  • Load Balancer (AWS Load Balancer Controller or NGINX Ingress)
  • Metrics (Metrics Server)
EBS CSI Driver:
# Install EBS CSI driver
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.25"
Storage Class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
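
To confirm the storage class actually provisions volumes, you can create a throwaway PVC (gp3-test is a placeholder name) and check that it binds:
# Create a test PVC against the gp3 storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp3-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp3
  resources:
    requests:
      storage: 1Gi
EOF

# Should reach Bound (the class above uses the default Immediate binding mode)
kubectl get pvc gp3-test

# Clean up
kubectl delete pvc gp3-test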

IAM Permissions

Qovery needs IAM permissions for:
  • Creating/managing Load Balancers
  • Managing Route 53 DNS records (if using Route 53 for DNS)
  • ECR access (if using ECR)
Example IAM policy (the wildcard actions are broad; scope them down for production):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:*",
        "ec2:Describe*",
        "route53:*",
        "acm:Describe*",
        "acm:List*"
      ],
      "Resource": "*"
    }
  ]
}
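
One way to grant these permissions is IAM Roles for Service Accounts (IRSA). A minimal sketch, assuming the policy above is saved as qovery-policy.json and the agent runs as the qovery-agent service account in the qovery namespace (check your Helm values for the actual account name):
# Create the IAM policy from the JSON above
aws iam create-policy \
  --policy-name qovery-agent-policy \
  --policy-document file://qovery-policy.json

# Bind it to the agent's service account via IRSA
# (replace <account-id> and my-cluster with your values)
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace qovery \
  --name qovery-agent \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/qovery-agent-policy \
  --override-existing-serviceaccounts \
  --approve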

Cluster Configuration

Resource Requirements

Minimum:
  • 2 nodes (t3.medium or larger)
  • 4 vCPUs total
  • 8 GB RAM total
Recommended:
  • 3+ nodes across multiple AZs
  • Auto-scaling enabled
  • Mix of On-Demand and Spot instances
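
As a sketch of the recommended shape, an eksctl cluster config could define a multi-AZ On-Demand group plus a Spot group (cluster name, region, AZs, and sizes are illustrative):
# cluster.yaml (eksctl) -- illustrative values
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: on-demand
    instanceType: t3.medium
    minSize: 2
    maxSize: 5
    availabilityZones: ["us-east-1a", "us-east-1b", "us-east-1c"]
  - name: spot
    instanceTypes: ["t3.medium", "t3a.medium"]
    spot: true
    minSize: 1
    maxSize: 5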

Networking

VPC Requirements:
  • Private subnets for pods
  • Public subnets for load balancers
  • NAT Gateway or NAT instance
  • Internet Gateway
Security Groups:
  • Allow traffic from load balancers to nodes
  • Allow pod-to-pod communication
  • Allow Qovery agent outbound to Qovery API
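
Load balancer controllers discover subnets by tag, so if your VPC was not created by eksctl or the EKS console you may need to tag subnets yourself (subnet IDs are placeholders):
# Public subnets: used for internet-facing load balancers
aws ec2 create-tags --resources subnet-aaa subnet-bbb \
  --tags Key=kubernetes.io/role/elb,Value=1

# Private subnets: used for internal load balancers
aws ec2 create-tags --resources subnet-ccc subnet-ddd \
  --tags Key=kubernetes.io/role/internal-elb,Value=1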

DNS Configuration

Option 1: External-DNS (automated)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install external-dns bitnami/external-dns \
  --set provider=aws \
  --set aws.zoneType=public \
  --set txtOwnerId=my-cluster
Option 2: Manual DNS management
  • Create DNS records manually for each application
  • Point to load balancer DNS name

Best Practices

Separate Namespaces

  • Use dedicated namespace for Qovery (qovery)
  • Separate namespaces per environment
  • Apply resource quotas (example below)
  • Network policies for isolation
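
For example, a minimal ResourceQuota for an environment namespace (the namespace name and limits are placeholders to adapt):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: env-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi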

Access Control

  • Create dedicated service account for Qovery
  • Use RBAC for least privilege (pattern below)
  • Rotate credentials regularly
  • Audit access logs
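
As an illustration of the least-privilege pattern (the permissions the Qovery agent itself needs come from its Helm chart; the names below are hypothetical), a namespace-scoped Role and binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: staging
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: staging
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io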

High Availability

  • Multi-AZ node distribution
  • Multiple replicas for Qovery agent
  • Pod disruption budgets (sketch below)
  • Regular backups
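
A PodDisruptionBudget keeps at least one agent replica up during node drains; a sketch, assuming the agent pods carry the app=qovery-agent label used in the troubleshooting commands below:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: qovery-agent-pdb
  namespace: qovery
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: qovery-agent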

Monitoring

  • Enable CloudWatch Container Insights (see below)
  • Set up alerts for Qovery agent
  • Monitor cluster resource usage
  • Track application health
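
On recent EKS versions, Container Insights can be enabled via the amazon-cloudwatch-observability addon (cluster name is a placeholder; the addon also needs CloudWatch permissions on the node role or via IRSA):
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name amazon-cloudwatch-observability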

Troubleshooting

Cluster shows as disconnected

Solutions:
  • Verify agent pods are running: kubectl get pods -n qovery
  • Check agent logs: kubectl logs -n qovery -l app=qovery-agent
  • Ensure outbound internet access from cluster
  • Verify API token is correct

Applications fail to deploy

Solutions:
  • Check node capacity and resources
  • Verify storage class exists and works
  • Ensure load balancer controller is working
  • Check for network policy blocking traffic

Load balancers are not created

Solutions:
  • Verify AWS Load Balancer Controller is installed
  • Check IAM permissions for load balancer creation
  • Ensure proper subnet tags (kubernetes.io/role/elb)
  • Review controller logs

Next Steps