Overview
How to integrate Datadog with Kubernetes on Qovery. While Qovery will soon provide basic metrics on app resource usage, you might need a more advanced view of what happens on your infrastructure. There are many solutions on the market, one of them being Datadog. Datadog is one of the leading platforms for monitoring and observability, and it is pretty easy to integrate it with Qovery.

Prerequisites
Before you begin, this guide assumes the following:

- You have a Qovery cluster running
- You have a dedicated Qovery project and environment to deploy Datadog (example: Project=Tooling, Environment=Production)
- You have a Datadog account
- You have already created a Datadog API key here: https://app.datadoghq.<region>/organization-settings/api-keys

Installation
In this tutorial, we will install the Datadog agent on a Qovery cluster to gather metrics about the infrastructure and applications.

This tutorial is based on a specific version of Datadog. We have created it to assist our users, but Qovery is not responsible for any configuration issues; please contact Datadog support for help with those.
Step 1: Add the Datadog Helm Repository
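In the Qovery console, register the Datadog chart repository so it can be selected in the next step (exact menu labels may vary by console version; the URL is Datadog's public chart repository):

- Go to Organization settings → Helm repositories → Add repository
- Name: Datadog
- Kind: HTTPS
- URL: https://helm.datadoghq.com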
Step 2: Create the Datadog Service in Qovery
Create Helm Service
In your environment:
- Click Create → Helm Chart
- Configure:
  - Application name: Datadog
  - Helm source: Helm repository
  - Repository: Datadog
  - Chart name: datadog
  - Version: 3.49.5 (or latest)
  - Allow cluster-wide resources: ✔️
Step 3: Store the Datadog API Key as a Secret
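One common approach is to store the key as a Qovery secret on the Helm service (for example, a secret environment variable named DD_API_KEY) and reference it from the values override. The sketch below assumes Qovery's qovery.env.<NAME> interpolation is available in Helm values files; check the Qovery documentation for your version:

```yaml
# Values override sketch -- DD_API_KEY is an assumed Qovery secret name
datadog:
  apiKey: qovery.env.DD_API_KEY
```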
Step 4: Deploy the Chart
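Once the deployment succeeds from the Qovery console, you can confirm the agent pods are running (the qovery namespace follows the troubleshooting examples below; adjust it if your chart is installed elsewhere):

```bash
# List the Datadog pods created by the chart
kubectl get pods -n qovery | grep datadog
```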
Step 5: Verify Setup on Datadog

Once the agent reports in, your cluster and its nodes should appear in Datadog (for example, in the Infrastructure List) within a few minutes.
Advanced Configuration
For more advanced Datadog configuration, you can extend the values.yaml through the Override as file section:
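For example, a values override enabling log collection and APM might look like the sketch below (option names taken from the Datadog chart; verify them against the chart version you deploy):

```yaml
datadog:
  site: datadoghq.com        # use datadoghq.eu for EU-hosted accounts
  logs:
    enabled: true            # collect container logs
    containerCollectAll: true
  apm:
    portEnabled: true        # open the trace agent port (8126) for APM
```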
Instrumenting Your Applications
To enable APM for your applications, add these environment variables in Qovery:
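A typical set, based on Datadog's standard tracing variables (the values shown are examples; DD_AGENT_HOST must resolve to the node-local agent, something the chart's admission controller can also inject automatically):

```
DD_ENV=production       # environment tag reported with traces
DD_SERVICE=my-app       # example service name
DD_VERSION=1.0.0        # example version tag
DD_AGENT_HOST=<node-ip> # address of the node-local Datadog agent
```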
Troubleshooting

Agent Not Starting
Problem: Datadog agent pods crash or fail to start

Solutions:
- Check the API key is valid: kubectl logs -n qovery datadog-agent-xxx
- Verify the secret exists: kubectl get secret -n qovery
- Check resource limits (the agent may need more memory)
- Review values.yaml for syntax errors
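For instance, to see why a pod keeps failing (the pod name is the same example placeholder used above):

```bash
# Look for OOMKilled or CrashLoopBackOff in the events section
kubectl describe pod -n qovery datadog-agent-xxx
```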
No Metrics in Datadog
Problem: Cluster appears in Datadog but no metrics are reported

Solutions:
- Wait 5-10 minutes for initial data
- Verify the agent is scraping: check the agent logs
- Ensure the correct site is set (datadoghq.com vs datadoghq.eu)
- Check that firewall/network policies allow outbound traffic to Datadog
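The agent's built-in status command is a quick way to check forwarder connectivity and which checks are running (the pod name is an example placeholder):

```bash
# Prints the agent status report, including connectivity to the Datadog intake
kubectl exec -it -n qovery datadog-agent-xxx -- agent status
```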
Application logs not appearing in Datadog when Karpenter is enabled
Problem: On a cluster with Karpenter enabled, some applications have their logs properly visible in Datadog, but the logs are missing for other applications.

Reason: The Datadog agent DaemonSet likely has node selectors or taints/tolerations that prevent it from running on the stable nodes where some of your services have been scheduled. Services with only one pod typically run on the stable node pool, which explains why logs from these services are missing when Karpenter is enabled.

Solutions:
- Option 1 - tolerate all taints (recommended):
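A minimal sketch using the Datadog chart's agents.tolerations value (verify the key against your chart version):

```yaml
agents:
  tolerations:
    # Tolerate every taint so the agent DaemonSet can run on all node pools
    - operator: Exists
```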
- Option 2 - Specific node pool tolerations. If you prefer to be more selective, use these specific tolerations:
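A sketch of a more selective override; the taint key below is a placeholder, so inspect the taints on your stable nodes first (for example, with kubectl describe node <node-name>) and substitute the real values:

```yaml
agents:
  tolerations:
    # Placeholder: replace with the actual taint carried by the stable node pool
    - key: <stable-node-pool-taint-key>
      operator: Exists
      effect: NoSchedule
```

After updating your Datadog Helm chart with these tolerations, the agent should be able to collect logs from services running on both the default and stable node pools.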