How we extended Helm lifecycle with Rust

Helm has its limits; discover how we extended its functionality with Rust
September 26, 2025
Pierre Mavro
CTO & Co-founder

At Qovery, we're using Rust for the Qovery Engine, which is in charge of deploying cloud-managed Kubernetes clusters, databases, and customers' applications.

To deploy containers on Kubernetes, we use Helm, as it provides a lot of exciting features like:

  • Automatic rollback on failure
  • Consistency across deployments (manages create/update/delete)
  • Deployment history with manual rollback feature
  • It's one of the most widely used standards for deploying on Kubernetes, so existing charts are easy to find
  • Deployment locking (denying parallel deployments)
  • And much more

If you’re familiar with Helm and use several Helm charts to deploy your whole stack, you have certainly already felt the lack of lifecycle management. By default, Helm provides hooks to manage lifecycles. This is excellent when you’re the chart owner, as you can control it.

But something is missing. How do you manage lifecycles when you’re using a community chart? You have to fork the original chart, add your hooks, and maintain them over time (more or less work, depending on how customized your hooks are). Quite boring, right?

Also, hooks require a container to run your code as a Job, so you have to build a container image just for this purpose, store it in a registry, and so on.

Finally, how do you handle exceptions and fallbacks, and how do you ensure your app works as expected (beyond the Kubernetes lifecycle)? There is no convenient way to do that with Helm.

That’s why we decided to build something on top of Helm, directly in the Engine, to add a common lifecycle mechanism.

Based on the Terraform Helm provider

In another article, I explained why we removed Helm from Terraform. Even though that move was necessary, the way the Helm provider declared chart configuration was pretty good. So we decided to use something close to it, with a struct.

Here is how we declare a chart to be deployed:

let external_dns = CommonChart {
    chart_info: ChartInfo {
        name: "externaldns".to_string(),
        path: chart_path("common/charts/external-dns"),
        values_files: vec![chart_path("chart_values/external-dns.yaml")],
        values: vec![
            // resources limits
            ChartSetValue {
                key: "resources.limits.cpu".to_string(),
                value: "50m".to_string(),
            },
            ChartSetValue {
                key: "resources.requests.cpu".to_string(),
                value: "50m".to_string(),
            },
            ChartSetValue {
                key: "resources.limits.memory".to_string(),
                value: "50Mi".to_string(),
            },
            ChartSetValue {
                key: "resources.requests.memory".to_string(),
                value: "50Mi".to_string(),
            },
        ],
        ..Default::default()
    },
};

Pretty simple for a basic Chart, right?
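
Under the hood, a chart declaration like this one boils down to a helm CLI invocation. Here is a rough, hypothetical sketch of the mapping (to_helm_args is ours for illustration only, and ChartSetValue is redefined to keep the snippet self-contained; the real Engine helper handles many more options):

// Hypothetical sketch: how a chart declaration roughly maps to a
// `helm upgrade --install` call. Not the actual Engine code.
pub struct ChartSetValue {
    pub key: String,
    pub value: String,
}

fn to_helm_args(
    name: &str,
    path: &str,
    namespace: &str,
    values_files: &[String],
    values: &[ChartSetValue],
) -> Vec<String> {
    let mut args = vec![
        "upgrade".to_string(),
        "--install".to_string(),
        "--namespace".to_string(),
        namespace.to_string(),
        name.to_string(),
        path.to_string(),
    ];
    // each values override file becomes a `-f` flag
    for file in values_files {
        args.push("-f".to_string());
        args.push(file.clone());
    }
    // each key/value pair becomes a `--set` flag
    for v in values {
        args.push("--set".to_string());
        args.push(format!("{}={}", v.key, v.value));
    }
    args
}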

Structure

#[derive(Clone)]
pub enum HelmAction {
    Deploy,
    Destroy,
    Skip,
}

#[derive(Copy, Clone)]
pub enum HelmChartNamespaces {
    KubeSystem,
    Prometheus,
    Logging,
    CertManager,
    NginxIngress,
    Qovery,
}

pub struct ChartInfo {
    pub name: String,
    pub path: String,
    pub namespace: HelmChartNamespaces,
    pub action: HelmAction,
    pub atomic: bool,
    pub force_upgrade: bool,
    pub last_breaking_version_requiring_restart: Option<String>,
    pub timeout: String,
    pub dry_run: bool,
    pub wait: bool,
    pub values: Vec<ChartSetValue>,
    pub values_files: Vec<String>,
    pub yaml_files_content: Vec<String>,
}

Compared to what Helm offers out of the box, you can note some differences we support:

  • Direct YAML content in yaml_files_content, which is sometimes super convenient.
  • last_breaking_version_requiring_restart: allows us to uninstall a chart before installing it again when a community chart introduces major breaking changes (and, of course, only when no data is associated with it); a simplified sketch of the idea follows this list
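
Here is a simplified, hypothetical sketch of that logic (not the actual Engine code; in practice, the deployed version would come from helm list). If it returns true, the Engine uninstalls the chart before deploying it again:

// Simplified sketch of the idea behind
// `helm_destroy_chart_if_breaking_changes_version_detected` used later in
// this article. Not the actual Engine code.
fn requires_uninstall_first(
    deployed_version: Option<&str>,
    last_breaking_version: Option<&str>,
) -> bool {
    match (deployed_version, last_breaking_version) {
        (Some(deployed), Some(breaking)) => {
            // naive comparison on the major version, for illustration only
            let major = |v: &str| {
                v.trim_start_matches('v')
                    .split('.')
                    .next()
                    .and_then(|m| m.parse::<u32>().ok())
                    .unwrap_or(0)
            };
            major(deployed) < major(breaking)
        }
        _ => false,
    }
}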

We then decided to create default values, as it’s very common for charts to share them:

impl Default for ChartInfo {
    fn default() -> ChartInfo {
        ChartInfo {
            name: "undefined".to_string(),
            path: "undefined".to_string(),
            namespace: HelmChartNamespaces::KubeSystem,
            action: HelmAction::Deploy,
            atomic: true,
            force_upgrade: false,
            last_breaking_version_requiring_restart: None,
            timeout: "180s".to_string(),
            dry_run: false,
            wait: true,
            values: Vec::new(),
            values_files: Vec::new(),
            yaml_files_content: vec![],
        }
    }
}
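
With those defaults in place, declaring a chart only requires setting the fields that matter, as the external-dns example above shows with ..Default::default(). A minimal, hypothetical declaration (the chart name here is illustrative):

// Hypothetical minimal declaration: every unspecified field (namespace,
// action, timeout, etc.) falls back to the defaults above.
let my_chart = ChartInfo {
    name: "my-chart".to_string(),
    path: chart_path("common/charts/my-chart"),
    ..Default::default()
};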

Trait

Here starts the exciting part. We’re using an interface (called a trait in Rust):

pub trait HelmChart: Send {
    fn run(&self, kubernetes_config: &Path, envs: &[(String, String)]) -> Result<Option<ChartPayload>, SimpleError> {
        info!("prepare and deploy chart {}", &self.get_chart_info().name);
        let payload = self.check_prerequisites()?;
        let payload = self.pre_exec(&kubernetes_config, &envs, payload)?;
        let payload = match self.exec(&kubernetes_config, &envs, payload.clone()) {
            Ok(payload) => payload,
            Err(e) => {
                error!(
                    "Error while deploying chart: {:?}",
                    e.message.clone().expect("no error message provided")
                );
                self.on_deploy_failure(&kubernetes_config, &envs, payload)?;
                return Err(e);
            }
        };
        let payload = self.post_exec(&kubernetes_config, &envs, payload)?;
        let payload = self.validate(&kubernetes_config, &envs, payload)?;
        Ok(payload)
    }
}

As you can see, there are several steps (a minimal implementation sketch follows this list):

  • check_prerequisites: ensure everything is OK before doing any action
  • pre_exec: run code before any action is performed on a chart
  • exec: perform an action (deploy/delete) on a chart
  • on_deploy_failure: run code when an action fails
  • post_exec: run code after the Helm action
  • validate: ensure deployed applications are working as expected
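
Since every step ships with a default implementation in the trait, a basic chart gets the full lifecycle for free by simply exposing its ChartInfo. A minimal sketch, assuming the trait also declares the get_chart_info accessor used everywhere above:

// Minimal sketch: CommonChart only provides its ChartInfo; every lifecycle
// step (check_prerequisites, pre_exec, exec, and so on) falls back to the
// trait defaults shown in the next section.
pub struct CommonChart {
    pub chart_info: ChartInfo,
}

impl HelmChart for CommonChart {
    fn get_chart_info(&self) -> &ChartInfo {
        &self.chart_info
    }
}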

Lifecycles

Let’s dig into what those lifecycles contain.

check_prerequisites

By default, we simply check prerequisites, like being able to access the Helm values override files:

fn check_prerequisites(&self) -> Result<Option<ChartPayload>, SimpleError> {
    let chart = self.get_chart_info();
    for file in chart.values_files.iter() {
        match fs::metadata(file) {
            Ok(_) => {}
            Err(e) => {
                return Err(SimpleError {
                    kind: SimpleErrorKind::Other,
                    message: Some(format!(
                        "Can't access helm chart override file {} for chart {}. {:?}",
                        file, chart.name, e
                    )),
                })
            }
        }
    }

    Ok(None)
}

pre_exec

pre_exec is really useful for some charts, to pre-check/validate/update things before going further. It's super useful, for example, when applications were already deployed without Helm and you want to give ownership to Helm by updating annotations (like the AWS CNI). By default, nothing is done:

fn pre_exec(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    Ok(payload)
}

exec

exec is where we perform the chart action: Deploy, Destroy, or Skip:

fn exec(
    &self,
    kubernetes_config: &Path,
    envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
    match self.get_chart_info().action {
        HelmAction::Deploy => {
            helm_exec_upgrade_with_chart_info(kubernetes_config, &environment_variables, self.get_chart_info())?
        }
        HelmAction::Destroy => {
            let chart_info = self.get_chart_info();
            match is_chart_deployed(
                kubernetes_config,
                environment_variables.clone(),
                Some(get_chart_namespace(chart_info.namespace.clone()).as_str()),
                chart_info.name.clone(),
            ) {
                Ok(deployed) => {
                    if deployed {
                        helm_exec_uninstall_with_chart_info(kubernetes_config, &environment_variables, chart_info)?
                    }
                }
                Err(e) => return Err(e),
            };
        }
        HelmAction::Skip => {}
    }
    Ok(payload)
}

on_deploy_failure

On failure, by default, we collect Kubernetes events to help investigate what went wrong:

fn on_deploy_failure(
    &self,
    kubernetes_config: &Path,
    envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    // print events for future investigation
    let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
    kube_get_events(
        kubernetes_config,
        get_chart_namespace(self.get_chart_info().namespace).as_str(),
        environment_variables,
    )?;
    Ok(payload)
}
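
kube_get_events is one of our kubectl wrappers. A minimal sketch of what such a wrapper can look like (simplified; the real one also deals with error handling and logging):

use std::path::Path;
use std::process::Command;

// Minimal sketch of a kubectl wrapper in the spirit of `kube_get_events`:
// dump namespace events, sorted by time, for post-mortem debugging.
fn get_events(kubernetes_config: &Path, namespace: &str) -> Result<String, String> {
    let output = Command::new("kubectl")
        .args([
            "--kubeconfig",
            kubernetes_config.to_str().unwrap_or_default(),
            "-n",
            namespace,
            "get",
            "events",
            "--sort-by=.metadata.creationTimestamp",
        ])
        .output()
        .map_err(|e| e.to_string())?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}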

post_exec

Run actions after an exec (deploy/uninstall/skip):

fn post_exec(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    Ok(payload)
}

validate

validate ensures the chart has correctly deployed its elements; this is where we check that the service works as expected:

fn validate(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    match self.get_chart_info().action {
        HelmAction::Deploy => {}
        _ => {}
    };
    Ok(payload)
}

Obviously, this has to be adapted for any deployed solution.
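
For instance, a chart exposing a Deployment could override validate to wait until the rollout is complete. A hypothetical sketch based on kubectl rollout status (names and timeout are illustrative):

use std::path::Path;
use std::process::Command;

// Hypothetical validate override: block until the chart's Deployment has
// rolled out, and fail the lifecycle if it never becomes ready.
fn validate_deployment_ready(
    kubernetes_config: &Path,
    namespace: &str,
    deployment: &str,
) -> Result<(), String> {
    let resource = format!("deployment/{}", deployment);
    let status = Command::new("kubectl")
        .args([
            "--kubeconfig",
            kubernetes_config.to_str().unwrap_or_default(),
            "-n",
            namespace,
            "rollout",
            "status",
            resource.as_str(),
            "--timeout=180s",
        ])
        .status()
        .map_err(|e| e.to_string())?;
    if status.success() {
        Ok(())
    } else {
        Err(format!("deployment {} is not ready", deployment))
    }
}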

Example of usage

Let’s try a real use case: the Prometheus Operator, where we need to override the exec method to manage the CRD lifecycle (during the uninstall phase):

#[derive(Default)]
pub struct PrometheusOperatorConfigChart {
    pub chart_info: ChartInfo,
}

impl HelmChart for PrometheusOperatorConfigChart {
    fn get_chart_info(&self) -> &ChartInfo {
        &self.chart_info
    }

    fn exec(
        &self,
        kubernetes_config: &Path,
        envs: &[(String, String)],
        payload: Option<ChartPayload>,
    ) -> Result<Option<ChartPayload>, SimpleError> {
        let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
        let chart_info = self.get_chart_info();
        match chart_info.action {
            HelmAction::Deploy => {
                if let Err(e) = helm_destroy_chart_if_breaking_changes_version_detected(
                    kubernetes_config,
                    &environment_variables,
                    chart_info,
                ) {
                    warn!(
                        "error while trying to destroy chart if breaking change is detected: {:?}",
                        e.message
                    );
                }

                helm_exec_upgrade_with_chart_info(kubernetes_config, &environment_variables, chart_info)?
            }
            HelmAction::Destroy => {
                match is_chart_deployed(
                    kubernetes_config,
                    environment_variables.clone(),
                    Some(get_chart_namespace(chart_info.namespace.clone()).as_str()),
                    chart_info.name.clone(),
                ) {
                    Ok(deployed) => {
                        if deployed {
                            let prometheus_crds = [
                                "prometheuses.monitoring.coreos.com",
                                "prometheusrules.monitoring.coreos.com",
                                "servicemonitors.monitoring.coreos.com",
                                "podmonitors.monitoring.coreos.com",
                                "alertmanagers.monitoring.coreos.com",
                                "thanosrulers.monitoring.coreos.com",
                            ];
                            helm_exec_uninstall_with_chart_info(kubernetes_config, &environment_variables, chart_info)?;
                            for crd in &prometheus_crds {
                                kubectl_exec_delete_crd(kubernetes_config, crd, environment_variables.clone())?;
                            }
                        }
                    }
                    Err(e) => return Err(e),
                };
            }
            HelmAction::Skip => {}
        }
        Ok(payload)
    }
}

Final word

We’ve been using this in production at Qovery for more than five months now. Speaking from an experienced Kubernetes point of view (6+ years in the Kubernetes ecosystem), I finally feel confident about Helm chart deployments.

We don’t know yet whether we will extract this into a dedicated library. If we receive requests, we’ll consider it.
