GitHub Variables and Nx Reusable Workflows

At Qovery, we build our frontend using Nx and rely on the official nrwl/ci GitHub Actions workflows. Our frontend requires third-party tokens at compile time, but we want to avoid hardcoding them or defining them in a .env file. The latter exposes them directly in our source code on GitHub, and even though they are not sensitive data, we don't want them to be easily scraped. Like many others, we ran into issues with environment variables when using these reusable workflows:

https://github.com/nrwl/ci?tab=readme-ov-file#limited-secrets-support
https://github.com/nrwl/ci/issues/92
https://github.com/nrwl/ci/issues/44

So, I wanted to share the lessons I learned from this experience.
September 26, 2025
Camille Tjhoa
Senior Software Engineer

First and foremost, we need to better understand GitHub Actions and its capabilities.

GitHub Secrets vs Variables

GitHub differentiates secrets from variables.

The main differences that matter for our use case are:

  • Secrets hold sensitive data and cannot be shared with reusable workflows
  • Variables hold non-sensitive data and are exposed to reusable workflows through the vars context

As mentioned earlier, our data consists of frontend third-party tokens that are not sensitive because they can be found in our public JS source code. Additionally, nx-cloud-main and nx-cloud-agents are reusable workflows.

Therefore, our solution should revolve around Variables.
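To illustrate what this looks like in practice, here is a minimal sketch of referencing a repository variable from a workflow step (NX_MY_TOKEN is an example variable name, not something defined by nrwl/ci):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Build with a non-sensitive token
        run: npx nx build
        env:
          # The `vars` context resolves repository/organization variables;
          # sensitive values would use the `secrets` context instead.
          NX_MY_TOKEN: ${{ vars.NX_MY_TOKEN }}
```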

Note that you can create repository variables in GitHub under Settings → Secrets and variables → Actions.

[Screenshot: Secrets and variables screen from GitHub]

Nrwl/ci reusable workflow configuration

Although GitHub Variables may appear to be a perfect fit, the reusable workflows from nrwl/ci do not reference the vars context internally, so our variables are not picked up by those workflows out of the box.

Upon closer examination of the workflow configuration, we find a parameter called "environment-variables". Unfortunately, this parameter requires environment variables in dotenv format:

NX_MY_TOKEN=1234
NX_MY_OTHER_TOKEN=4567

whereas our vars context is a JSON object:

{
  "NX_MY_TOKEN": "1234",
  "NX_MY_OTHER_TOKEN": "4567"
}

GitHub script to the rescue

To achieve the desired format, we require some intermediate scripting. Luckily, the actions/github-script is the ideal tool for this task. It enables us to convert GitHub variables into the expected Nx format.

env-vars:
  runs-on: ubuntu-latest
  outputs:
    variables: ${{ steps.var.outputs.variables }}
  steps:
    - name: Setting global variables
      uses: actions/github-script@v7
      id: var
      with:
        script: |
          const varsFromJSON = ${{ toJSON(vars) }}
          const variables = []
          for (const v in varsFromJSON) {
            variables.push(v + '=' + varsFromJSON[v])
          }
          core.setOutput('variables', variables.join('\n'))

nx-main:
  needs: [env-vars]
  name: Nx Cloud - Main Job
  uses: nrwl/ci/.github/workflows/nx-cloud-main.yml@<version>
  with:
    environment-variables: |
      ${{ needs.env-vars.outputs.variables }}
    # ...

agents:
  needs: [env-vars]
  name: Nx Cloud - Agents
  uses: nrwl/ci/.github/workflows/nx-cloud-agents.yml@<version>
  with:
    number-of-agents: 3
    environment-variables: |
      ${{ needs.env-vars.outputs.variables }}
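Outside of GitHub Actions, the conversion step boils down to a few lines of JavaScript. Here is a standalone sketch of the same logic (toDotenv is a hypothetical helper name, introduced only for illustration):

```javascript
// Convert a GitHub `vars` context object (what `${{ toJSON(vars) }}` expands to)
// into the dotenv-style string that nrwl/ci's `environment-variables`
// input expects: one KEY=value pair per line.
function toDotenv(vars) {
  return Object.entries(vars)
    .map(([key, value]) => `${key}=${value}`)
    .join('\n');
}

// Example: the object form GitHub provides...
const vars = { NX_MY_TOKEN: '1234', NX_MY_OTHER_TOKEN: '4567' };
// ...becomes the line-per-variable format the Nx workflows accept.
console.log(toDotenv(vars));
```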

Conclusion

We have learned more about the capabilities of GitHub Actions and are taking our CI to the next level!

With a bit of scripting and the options provided by the Nx workflows, we can feed our environment variables to the build without committing them to the repository.

And the best part is that we achieve this without hardcoding them or relying on a .env file.
