If you’re familiar with Helm and use several Helm charts to deploy everything in your stack, you have certainly felt the lack of lifecycle management. Out of the box, Helm provides Hooks to manage lifecycles. This works well when you’re the chart owner, as you control the chart.
But something is missing: how do you manage lifecycles when you’re using a community chart? You have to fork the original chart, add your hooks, and maintain them over time (more or less work depending on how customized your hooks are). Quite tedious, right?
Also, Hooks require a container to run your code as a Job, so you have to build a container image for this sole purpose, store it in a registry, and so on.
Finally, how do you handle exceptions and fallbacks, and ensure your app works as expected (beyond what the Kubernetes lifecycle gives you)? Helm offers no obvious way to do that.
That’s why we decided to build a common lifecycle mechanism on top of Helm, directly in the Engine.
Based on the Terraform Helm provider
In another article, I explained why we removed Helm from Terraform. Even though that move was necessary, the way the Helm provider declared chart configuration was pretty good, so we decided to use something close to it, in the form of a struct.
Compared to the Helm provider’s chart configuration, you can note some additions we support (see the struct sketch after this list):
Direct YAML content via yaml_files_content, which is sometimes super convenient.
last_breaking_version_requiring_restart: allows us to uninstall a chart before installing it again when a community chart introduces major breaking changes (and, of course, only when no data is associated with it).
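To illustrate, here is a minimal sketch of what such a configuration struct can look like. Apart from yaml_files_content and last_breaking_version_requiring_restart, which come from this article, the field names are plausible assumptions, not the Engine’s exact definition:

```rust
// Sketch of a chart configuration struct, inspired by the inputs of the
// Terraform Helm provider. Only `yaml_files_content` and
// `last_breaking_version_requiring_restart` come from this article; the
// other field names are assumptions.
pub struct ChartInfo {
    pub name: String,                    // Helm release name
    pub path: String,                    // local path to the chart
    pub namespace: String,               // target Kubernetes namespace
    pub timeout_in_seconds: i64,         // how long to wait for the release
    pub values_files: Vec<String>,       // paths to values override files
    pub yaml_files_content: Vec<String>, // raw YAML passed directly, no file needed
    // when upgrading across this version, uninstall the chart and install it
    // again, for community charts shipping breaking changes (and no data)
    pub last_breaking_version_requiring_restart: Option<String>,
}
```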
We then decided to provide default lifecycle steps, since most charts share common ones (see the trait sketch after this list):
check_prerequisites: ensure everything is OK before performing any action
pre_exec: run code before any action on a chart
exec: perform an action (deploy/delete) on a chart
on_deploy_failure: run code when an action fails
post_exec: run code after the Helm action
validate: ensure deployed applications work as expected
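In Rust, this maps naturally to a trait with default implementations, so each chart only overrides the steps it needs. A minimal sketch, assuming the ChartInfo struct above; ChartError and HelmAction are hypothetical names, not necessarily the Engine’s:

```rust
// Hypothetical error and action types, for illustration only.
#[derive(Debug)]
pub struct ChartError(pub String);

pub enum HelmAction {
    Deploy,
    Delete,
}

// Lifecycle trait with sensible defaults: a chart only overrides the
// steps it actually needs to customize.
pub trait HelmChart {
    fn get_chart_info(&self) -> &ChartInfo;

    // ensure everything is OK before performing any action
    // (the default checks values file permissions; see the next section)
    fn check_prerequisites(&self) -> Result<(), ChartError> {
        Ok(())
    }

    // run code before any action on the chart
    fn pre_exec(&self) -> Result<(), ChartError> {
        Ok(())
    }

    // perform an action (deploy/delete) on the chart; in a real
    // implementation the default would drive `helm upgrade --install`
    // or `helm uninstall`
    fn exec(&self, _action: &HelmAction) -> Result<(), ChartError> {
        Ok(())
    }

    // run code when an action fails
    fn on_deploy_failure(&self) -> Result<(), ChartError> {
        Ok(())
    }

    // run code after the Helm action
    fn post_exec(&self) -> Result<(), ChartError> {
        Ok(())
    }

    // ensure deployed applications work as expected
    fn validate(&self) -> Result<(), ChartError> {
        Ok(())
    }
}
```

With this shape, every chart gets the same predictable lifecycle, and community charts can be customized without forking them.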
Lifecycles
Let’s dig into what those lifecycles contain.
check_prerequisites
By default, we simply check the prerequisites, such as file permissions on the Helm values override files:

```rust
fn check_prerequisites(&self) -> Result<(), ChartError>
```
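A minimal sketch of that default, assuming the values_files field and ChartError type from the earlier sketches (the Engine’s actual checks may go further):

```rust
use std::fs::File;

// Default body inside the HelmChart trait: verify every values override
// file exists and is readable before touching the release.
fn check_prerequisites(&self) -> Result<(), ChartError> {
    for values_file in &self.get_chart_info().values_files {
        // opening the file checks both existence and read permission
        File::open(values_file).map_err(|e| {
            ChartError(format!("can't read values file {values_file}: {e}"))
        })?;
    }
    Ok(())
}
```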
pre_exec
pre_exec is really useful for some charts, to pre-check, validate, or update things before going further. It’s super useful, for example, when an application was deployed without Helm and you want to give ownership to Helm by updating annotations (like the AWS CNI). By default, nothing is done:
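Following the trait sketch above, that default is simply a no-op:

```rust
// Default body inside the HelmChart trait: no pre-exec step.
fn pre_exec(&self) -> Result<(), ChartError> {
    Ok(())
}
```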
Obviously, this has to be adapted to each deployed solution.
Usage example
Let’s try it with a real use case: the Prometheus Operator, where we need to change the exec method so we can manage the CRD lifecycle (during the uninstall phase):
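Here is a hedged sketch of what such an override can look like, reusing the hypothetical types from the sketches above. The idea: `helm uninstall` leaves CRDs behind by design, so on delete the operator’s CRDs are removed explicitly. The CRD list and ordering are illustrative, not the Engine’s exact code:

```rust
use std::process::Command;

// CRDs installed by the Prometheus Operator chart; the list is
// illustrative, not exhaustive.
const PROMETHEUS_CRDS: &[&str] = &[
    "prometheuses.monitoring.coreos.com",
    "prometheusrules.monitoring.coreos.com",
    "servicemonitors.monitoring.coreos.com",
    "podmonitors.monitoring.coreos.com",
    "alertmanagers.monitoring.coreos.com",
];

pub struct PrometheusOperatorChart {
    pub chart_info: ChartInfo,
}

impl HelmChart for PrometheusOperatorChart {
    fn get_chart_info(&self) -> &ChartInfo {
        &self.chart_info
    }

    // Override exec: on delete, remove the operator's CRDs in addition to
    // running the regular Helm action.
    fn exec(&self, action: &HelmAction) -> Result<(), ChartError> {
        if let HelmAction::Delete = action {
            for &crd in PROMETHEUS_CRDS {
                // equivalent of `kubectl delete crd <name>`
                let status = Command::new("kubectl")
                    .args(["delete", "crd", crd])
                    .status()
                    .map_err(|e| ChartError(format!("kubectl failed: {e}")))?;
                if !status.success() {
                    return Err(ChartError(format!("can't delete CRD {crd}")));
                }
            }
        }
        // then run the regular Helm action (`helm upgrade --install` or
        // `helm uninstall`), omitted here for brevity
        Ok(())
    }
}
```

On deploy, the default behavior is kept; on delete, the CRDs are cleaned up so nothing the chart installed is left orphaned in the cluster.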
We’ve been using this in production at Qovery for more than five months now. From an experienced Kubernetes point of view (6+ years in the Kubernetes ecosystem), I finally feel confident about Helm chart deployments.
We don’t know yet whether we’ll extract this into a dedicated library. If we receive requests for it, we’ll consider it.