3 ways of cloning an application and a database per git branch

Back in the early days of software development, having multiple developers work on the same application was a tough challenge. That's why Version Control Systems (VCS) like Git were created and methodologies like Feature Branching were introduced.
September 26, 2025
Romaric Philogène
CEO & Co-founder

1 feature = 1 branch

The basic idea of working per Git branch (also known as Feature Branching) is that when you start working on a feature, you create a branch in your repository (e.g., with Git) dedicated to that feature.
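In Git terms, this workflow boils down to a few commands — the branch name and commit message here are illustrative:

```shell
# Start a feature branch from the main line of development
git checkout -b feature_1

# Work in isolation: commits land on feature_1 only
git add .
git commit -m "implement feature 1"

# Publish the branch for review / CI
git push -u origin feature_1
```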


The advantage of feature branching is that each developer can work on their feature in isolation from changes going on elsewhere. This concept was designed for stateless applications. However, most applications rely on databases, which makes them "stateful applications".

The question is: how do we use the Feature Branch concept with a stateful application?

Let’s take a concrete example:

We have a NodeJS application that connects to a PostgreSQL database, for which we have 3 distinct branches: master, staging, and feature_1.

Applications from different branches connected to the same database

No matter which branch we are working on, we are always connected to the same database. If our application writes, modifies, or deletes data while we are on the "feature_1" branch, all the other branches are impacted by these changes. This violates the core Feature Branch principle: isolation. There is nothing more stressful than working with the knowledge that we could lose data and break everything at any time.
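Concretely, the problem can be sketched like this — the connection string and database name are hypothetical, but the point is that every branch resolves to the same one:

```shell
# Hypothetical shared configuration: the same connection string is
# committed (or injected) for every branch of the repository
DATABASE_URL="postgres://db-host:5432/myapp"

# master, staging, and feature_1 all hit the same database:
for branch in master staging feature_1; do
  echo "$branch -> $DATABASE_URL"   # prints the same connection string for every branch
done
```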

So: how can we comply with the Feature Branch principle (isolation) when a database is involved?

One possible solution is to have one copy of the database per branch. Each database can be modified without the risk of modifying the others.
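A minimal sketch of this idea, assuming PostgreSQL and an illustrative naming scheme (myapp_<branch> — not a real convention from the article), is to derive each database's name from the current Git branch and seed it from the master database:

```shell
# Derive a per-branch database name from the current Git branch,
# sanitized so it is a valid PostgreSQL identifier
branch=$(git symbolic-ref --short HEAD)                            # e.g. "feature_1"
db_name="myapp_$(printf '%s' "$branch" | tr -c 'a-zA-Z0-9' '_')"

# Create the branch database and seed it from the master database
# (assumes local createdb/psql access and a 'myapp_master' database)
createdb "$db_name"
pg_dump myapp_master | psql "$db_name"

# The application then reads a branch-specific connection string
echo "DATABASE_URL=postgres://db-host:5432/$db_name"
```

Each branch now gets its own copy, at the cost of creating and refreshing these copies yourself — which is exactly the trade-off the next sections compare.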

Branches have their own database
How to have one database per branch?

Three main approaches are possible (manual, partially automated, and fully automated), each with its own advantages and disadvantages.

Manual

Manual means installing all your required services (DNS, databases, VPC...) by hand. This approach is fast to bootstrap but hard to maintain over time.

Advantages

  • Fast to bootstrap

Disadvantages

  • You need to configure the system and network services (DNS, networking, security...)
  • You need to manually create one database per branch
  • You need to manually synchronize the data between the databases
  • You need to set up observability, monitoring, and alerting
  • Hard to maintain over time
  • Error-prone

Partially automated

Partially automated means spending time setting up a complete system that provides the services your project requires. This type of architecture needs time and effort from experienced DevOps engineers. It's a good choice for large corporations that can absorb the cost, but most of the time a really bad one for smaller teams.

Advantages

  • Automatic system and network configuration + database provisioning with tools like Terraform (Infrastructure as Code)
  • Automatic data synchronization between databases (with a custom script)
  • Perfectly fits your needs
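As a rough sketch of what such a setup can look like in CI — the branch_name variable and sync_data.sh script are hypothetical names for illustration, not real Terraform or project artifacts:

```shell
# Hypothetical CI step: provision a per-branch database with Terraform,
# assuming the Terraform module exposes a 'branch_name' input variable
branch=$(git symbolic-ref --short HEAD)
db_name="myapp_$(printf '%s' "$branch" | tr -c 'a-zA-Z0-9' '_')"

terraform apply -auto-approve -var "branch_name=$branch"

# Custom data synchronization script (the part Terraform does not cover)
./scripts/sync_data.sh myapp_master "$db_name"
```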

Disadvantages

  • You still need to manually create a database for each new branch
  • Requires months to fully set up
  • Expensive to maintain over time (experienced DevOps engineers required)
  • You need to set up observability, monitoring, and alerting

Fully automated (with Qovery)

Fully automated means that all the resources a developer requires are deployed automatically, whatever their needs. With Qovery, all resources are provisioned automatically, and developers don't even have to change their habits to deploy their applications. Feature Branching is supported out of the box.

Qovery gives any developer the power to clone an application and a database without changing their habits.

Let's take the example of our 3 branches with a NodeJS application and a PostgreSQL database.

1 branch = 1 isolated environment

Here are the commands necessary to get a fully isolated application and database on each branch:

$ pwd
~/my-nodejs-project

# GitHub, Bitbucket, GitLab seamless authentication
$ qovery auth
Opening your browser, waiting for your authentication...
Authentication successful!

# Wizard to generate .qovery.yml
$ qovery init

$ git add .qovery.yml
$ git commit -m "add .qovery.yml file"

# Deploy master environment
$ git push -u origin master

# Show master environment information
$ qovery status

# Create branch staging
$ git checkout -b staging
# Deploy staging environment!
$ git push -u origin staging

# Show staging environment information
$ qovery status

# Create branch feature_1
$ git checkout -b feature_1
# Deploy feature_1 environment!
$ git push -u origin feature_1

# Show feature_1 environment information
$ qovery status

Advantages

  • Accessible to any developer
  • No setup time
  • Programming-language agnostic
  • Integrated with Git (no other dependencies required)
  • Compliant with the Feature Branching concept

Disadvantages

  • You need to know how to create a Dockerfile
  • Deployment only available on AWS, GCP, and Azure
  • Only integrated with GitHub, GitLab, and Bitbucket

Conclusion

In this article, we have seen that the purpose of the Feature Branch is to let us develop a feature without being impacted by changes made on other branches. However, the Feature Branch concept is difficult to apply when our application needs a database, because each application (from every branch) accesses the same database. This is contrary to the Feature Branch isolation principle and creates serious data-safety problems.

Qovery allows applications and databases to be seamlessly duplicated from one branch to another, and thus respects the isolation principle of the Feature Branch.

Try Qovery now!

Useful links: Feature Branching and Continuous Integration from Martin Fowler - Stateless vs Stateful from StackOverflow
