How we extended Helm lifecycle with Rust

Helm has some limits; discover how we extended its functionality with Rust
Pierre Mavro
CTO & Co-founder

At Qovery, we're using Rust for the Qovery Engine, which is in charge of deploying cloud-managed Kubernetes clusters, databases, and customers' applications.

To deploy containers on Kubernetes, we're using Helm, as it provides a lot of exciting features like:

  • Automatic rollback on failure
  • Consistency across deployments (manages create/update/delete)
  • Deployment history with manual rollback feature
  • It's one of the most widely used standards for deploying on Kubernetes, so existing charts are easy to find
  • Safe lock deployments (denying parallel deployments)
  • And much more

If you’re familiar with Helm and use several Helm charts to deploy everything in your stack, you have certainly already felt the lack of lifecycle management. By default, Helm provides Hooks to manage lifecycles. This is excellent when you’re the chart owner, as you control them.

But something is missing. How do you manage lifecycles when you’re using a community chart? You have to fork the original chart, add your hooks, and maintain them over time (more or less work, depending on how customized your Hooks are). Quite boring, right?

Also, Hooks require a container to run your code as a Job, so you have to build a container just for this purpose, store it in a registry, and so on.

Finally, how do you handle exceptions and fallbacks, and ensure your app works as expected (beyond what Kubernetes lifecycles give you)? There is no easy way to do that with Helm.

That’s why we decided to build something on top of Helm directly in the Engine, to add a common lifecycle mechanism.

Based on the Terraform Helm provider

In another article, I explained why we removed Helm from Terraform. Even though the move was necessary, the way the Helm provider declared chart configuration was pretty good. So we decided to use something close to it, based on a struct.

Here is how we declare a chart to be deployed:

let external_dns = CommonChart {
    chart_info: ChartInfo {
        name: "externaldns".to_string(),
        path: chart_path("common/charts/external-dns"),
        values_files: vec![chart_path("chart_values/external-dns.yaml")],
        values: vec![
            // resources limits
            ChartSetValue {
                key: "resources.limits.cpu".to_string(),
                value: "50m".to_string(),
            },
            ChartSetValue {
                key: "resources.requests.cpu".to_string(),
                value: "50m".to_string(),
            },
            ChartSetValue {
                key: "resources.limits.memory".to_string(),
                value: "50Mi".to_string(),
            },
            ChartSetValue {
                key: "resources.requests.memory".to_string(),
                value: "50Mi".to_string(),
            },
        ],
        ..Default::default()
    },
};

Pretty simple for a basic Chart, right?

Structure

#[derive(Clone)]
pub enum HelmAction {
    Deploy,
    Destroy,
    Skip,
}

#[derive(Copy, Clone)]
pub enum HelmChartNamespaces {
    KubeSystem,
    Prometheus,
    Logging,
    CertManager,
    NginxIngress,
    Qovery,
}

pub struct ChartInfo {
    pub name: String,
    pub path: String,
    pub namespace: HelmChartNamespaces,
    pub action: HelmAction,
    pub atomic: bool,
    pub force_upgrade: bool,
    pub last_breaking_version_requiring_restart: Option<String>,
    pub timeout: String,
    pub dry_run: bool,
    pub wait: bool,
    pub values: Vec<ChartSetValue>,
    pub values_files: Vec<String>,
    pub yaml_files_content: Vec<ChartValuesGenerated>, // inline YAML manifests
}

Compared to what Helm provides out of the box, you can note some differences we support:

  • Direct YAML content in yaml_files_content, which is sometimes super convenient.
  • last_breaking_version_requiring_restart: lets us uninstall a chart before installing it again when a community chart introduces major breaking changes (and, of course, only when no data is associated with it). Both are shown in the sketch below.
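
Here is a minimal sketch of how a chart could use both fields. The chart name, manifest, and version are made up for illustration, and we assume the inline YAML entries have a ChartValuesGenerated { filename, yaml_content } shape and that the breaking version is a plain string:

let custom_metrics = CommonChart {
    chart_info: ChartInfo {
        name: "custom-metrics".to_string(), // hypothetical chart
        path: chart_path("common/charts/custom-metrics"),
        // uninstall then reinstall when upgrading across this breaking version
        last_breaking_version_requiring_restart: Some("2.0.0".to_string()),
        // extra manifest injected directly as YAML, no values file needed
        yaml_files_content: vec![ChartValuesGenerated {
            filename: "custom-metrics-extra.yaml".to_string(),
            yaml_content: "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: custom-metrics-extra\n".to_string(),
        }],
        ..Default::default()
    },
};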

We then decided to create default values, as most charts share common ones:

impl Default for ChartInfo {
    fn default() -> ChartInfo {
        ChartInfo {
            name: "undefined".to_string(),
            path: "undefined".to_string(),
            namespace: KubeSystem,
            action: Deploy,
            atomic: true,
            force_upgrade: false,
            last_breaking_version_requiring_restart: None,
            timeout: "180s".to_string(),
            dry_run: false,
            wait: true,
            values: Vec::new(),
            values_files: Vec::new(),
            yaml_files_content: vec![],
        }
    }
}

Trait

Here is where the exciting part starts. We’re using an interface (called a trait in Rust):

pub trait HelmChart: Send {
    // ChartPayload carries optional data from one lifecycle step to the next
    fn run(&self, kubernetes_config: &Path, envs: &[(String, String)]) -> Result<Option<ChartPayload>, SimpleError> {
        info!("prepare and deploy chart {}", &self.get_chart_info().name);
        let payload = self.check_prerequisites()?;
        let payload = self.pre_exec(&kubernetes_config, &envs, payload)?;
        let payload = match self.exec(&kubernetes_config, &envs, payload.clone()) {
            Ok(payload) => payload,
            Err(e) => {
                error!(
                    "Error while deploying chart: {:?}",
                    e.message.clone().expect("no error message provided")
                );
                self.on_deploy_failure(&kubernetes_config, &envs, payload)?;
                return Err(e);
            }
        };
        let payload = self.post_exec(&kubernetes_config, &envs, payload)?;
        let payload = self.validate(&kubernetes_config, &envs, payload)?;
        Ok(payload)
    }
}

As you can see, there are several steps:

  • check_prerequisites: ensure everything is OK before doing any action
  • pre_exec: run code before any action is performed on the chart
  • exec: perform the action (deploy/destroy/skip) on the chart
  • on_deploy_failure: run code when an action fails
  • post_exec: run code after the Helm action
  • validate: ensure deployed applications are working as expected
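
Since all of these steps ship with default implementations, a basic chart only has to expose its ChartInfo. This is roughly how we assume the CommonChart used in the first example is wired (the actual implementation may differ slightly):

#[derive(Default)]
pub struct CommonChart {
    pub chart_info: ChartInfo,
}

// All lifecycle methods keep their default behavior; only the accessor is needed.
impl HelmChart for CommonChart {
    fn get_chart_info(&self) -> &ChartInfo {
        &self.chart_info
    }
}

Deploying the external-dns chart from the first example then boils down to calling external_dns.run(&kubernetes_config, &envs).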

Lifecycles

Let’s dig into what those lifecycles contain.

check_prerequisites

By default, we simply check the prerequisites, like whether the Helm values override files are accessible:

fn check_prerequisites(&self) -> Result<Option<ChartPayload>, SimpleError> {
    let chart = self.get_chart_info();
    for file in chart.values_files.iter() {
        match fs::metadata(file) {
            Ok(_) => {}
            Err(e) => {
                return Err(SimpleError {
                    kind: SimpleErrorKind::Other,
                    message: Some(format!(
                        "Can't access helm chart override file {} for chart {}. {:?}",
                        file, chart.name, e
                    )),
                })
            }
        }
    }

    Ok(None)
}

pre_exec

Pre exec is really useful for some charts: you can pre-check, validate, or update things before going further. It is super useful, for example, when applications were already deployed without Helm and you want to give ownership to Helm by updating their annotations (like the AWS CNI). By default, nothing is done:

fn pre_exec(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    Ok(payload)
}

exec

Exec is where we perform the chart action defined in ChartInfo (Deploy/Destroy/Skip):

fn exec(
    &self,
    kubernetes_config: &Path,
    envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
    match self.get_chart_info().action {
        HelmAction::Deploy => {
            helm_exec_upgrade_with_chart_info(kubernetes_config, &environment_variables, self.get_chart_info())?
        }
        HelmAction::Destroy => {
            let chart_info = self.get_chart_info();
            match is_chart_deployed(
                kubernetes_config,
                environment_variables.clone(),
                Some(get_chart_namespace(chart_info.namespace.clone()).as_str()),
                chart_info.name.clone(),
            ) {
                Ok(deployed) => {
                    if deployed {
                        helm_exec_uninstall_with_chart_info(kubernetes_config, &environment_variables, chart_info)?
                    }
                }
                Err(e) => return Err(e),
            };
        }
        HelmAction::Skip => {}
    }
    Ok(payload)
}
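
For context, helm_exec_upgrade_with_chart_info is a wrapper around the helm binary. Assuming it shells out to helm, a simplified sketch of how the ChartInfo options could translate into CLI arguments looks like this (the real helper does more, such as wiring environment variables and parsing output):

// Simplified sketch: build the `helm upgrade` arguments from a ChartInfo.
fn helm_upgrade_args(chart: &ChartInfo) -> Vec<String> {
    let mut args = vec![
        "upgrade".to_string(),
        "--install".to_string(),
        "--namespace".to_string(),
        get_chart_namespace(chart.namespace),
        "--timeout".to_string(),
        chart.timeout.clone(),
    ];
    if chart.atomic {
        args.push("--atomic".to_string()); // automatic rollback on failure
    }
    if chart.force_upgrade {
        args.push("--force".to_string());
    }
    if chart.dry_run {
        args.push("--dry-run".to_string());
    }
    if chart.wait {
        args.push("--wait".to_string());
    }
    for values_file in &chart.values_files {
        args.push("-f".to_string());
        args.push(values_file.clone());
    }
    for value in &chart.values {
        args.push("--set".to_string());
        args.push(format!("{}={}", value.key, value.value));
    }
    args.push(chart.name.clone());
    args.push(chart.path.clone());
    args
}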

on_deploy_failure

On failure, by default, we collect Kubernetes events to help debug what went wrong:

fn on_deploy_failure(
    &self,
    kubernetes_config: &Path,
    envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    // print events for future investigation
    let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
    kube_get_events(
        kubernetes_config,
        get_chart_namespace(self.get_chart_info().namespace).as_str(),
        environment_variables,
    )?;
    Ok(payload)
}

post_exec

Run actions after exec (deploy/uninstall/skip). By default, nothing is done:

fn post_exec(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    Ok(payload)
}

validate

Here we ensure the chart has correctly deployed its elements and validate that the service is working as expected:

fn validate(
    &self,
    _kubernetes_config: &Path,
    _envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    match self.get_chart_info().action {
        HelmAction::Deploy => {}
        _ => {}
    };
    Ok(payload)
}

Obviously, this has to be adapted to each deployed solution.
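
For instance, a chart deploying a single Deployment could override validate with something like the following sketch. The deployment name is an assumption, and real checks can go much further (HTTP probes, querying the API, etc.):

fn validate(
    &self,
    kubernetes_config: &Path,
    envs: &[(String, String)],
    payload: Option<ChartPayload>,
) -> Result<Option<ChartPayload>, SimpleError> {
    // Hypothetical check: wait until the chart's main Deployment is fully rolled out
    let status = std::process::Command::new("kubectl")
        .env("KUBECONFIG", kubernetes_config)
        .envs(envs.iter().map(|(k, v)| (k.as_str(), v.as_str())))
        .args(&[
            "-n",
            get_chart_namespace(self.get_chart_info().namespace).as_str(),
            "rollout",
            "status",
            "deployment/external-dns", // assumed deployment name
            "--timeout=120s",
        ])
        .status()
        .map_err(|e| SimpleError {
            kind: SimpleErrorKind::Other,
            message: Some(format!("Can't run kubectl rollout status: {:?}", e)),
        })?;

    if !status.success() {
        return Err(SimpleError {
            kind: SimpleErrorKind::Other,
            message: Some("external-dns deployment is not ready".to_string()),
        });
    }

    Ok(payload)
}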

Example of usage

Let’s try it on a real use case. Here is the Prometheus Operator, where we need to override the exec method to handle the CRDs as part of the lifecycle (during the uninstall phase):

#[derive(Default)]
pub struct PrometheusOperatorConfigChart {
    pub chart_info: ChartInfo,
}

impl HelmChart for PrometheusOperatorConfigChart {
    fn get_chart_info(&self) -> &ChartInfo {
        &self.chart_info
    }

    fn exec(
        &self,
        kubernetes_config: &Path,
        envs: &[(String, String)],
        payload: Option<ChartPayload>,
    ) -> Result<Option<ChartPayload>, SimpleError> {
        let environment_variables: Vec<(&str, &str)> = envs.iter().map(|x| (x.0.as_str(), x.1.as_str())).collect();
        let chart_info = self.get_chart_info();
        match chart_info.action {
            HelmAction::Deploy => {
                if let Err(e) = helm_destroy_chart_if_breaking_changes_version_detected(
                    kubernetes_config,
                    &environment_variables,
                    chart_info,
                ) {
                    warn!(
                        "error while trying to destroy chart if breaking change is detected: {:?}",
                        e.message
                    );
                }

                helm_exec_upgrade_with_chart_info(kubernetes_config, &environment_variables, chart_info)?
            }
            HelmAction::Destroy => {
                let chart_info = self.get_chart_info();
                match is_chart_deployed(
                    kubernetes_config,
                    environment_variables.clone(),
                    Some(get_chart_namespace(chart_info.namespace.clone()).as_str()),
                    chart_info.name.clone(),
                ) {
                    Ok(deployed) => {
                        if deployed {
                            let prometheus_crds = [
                                "prometheuses.monitoring.coreos.com",
                                "prometheusrules.monitoring.coreos.com",
                                "servicemonitors.monitoring.coreos.com",
                                "podmonitors.monitoring.coreos.com",
                                "alertmanagers.monitoring.coreos.com",
                                "thanosrulers.monitoring.coreos.com",
                            ];
                            helm_exec_uninstall_with_chart_info(kubernetes_config, &environment_variables, chart_info)?;
                            for crd in &prometheus_crds {
                                kubectl_exec_delete_crd(kubernetes_config, crd, environment_variables.clone())?;
                            }
                        }
                    }
                    Err(e) => return Err(e),
                };
            }
            HelmAction::Skip => {}
        }
        Ok(payload)
    }
}
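
Wiring it into a deployment then looks roughly like this (the chart path is illustrative, and kubernetes_config and envs are assumed to be in scope):

let prometheus_operator = PrometheusOperatorConfigChart {
    chart_info: ChartInfo {
        name: "prometheus-operator".to_string(),
        path: chart_path("common/charts/prometheus-operator"),
        namespace: HelmChartNamespaces::Prometheus,
        ..Default::default()
    },
};

// The trait's run() method drives the whole lifecycle described above.
prometheus_operator.run(&kubernetes_config, &envs)?;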

Final word

We’ve been using this in production at Qovery for more than five months now. From an experienced Kubernetes point of view (6+ years in the Kubernetes ecosystem), I finally feel confident about Helm chart deployments.

We don’t know yet whether we will extract this into a dedicated library. If we receive requests for it, we’ll consider it.
