Understanding the Basics of Application Autoscaling

Application autoscaling is a broad subject. At first glance it looks simple, because everyone understands the goal and how it works conceptually, but in practice it's not that simple.
Pierre Mavro
CTO & Co-founder

Let’s start with a simple diagram showing what types of application autoscaling exist today:

Horizontal Autoscaling vs. Vertical Autoscaling vs. Multi-dimensional Autoscaling

At Qovery, we use horizontal and vertical autoscaling daily for our production at different levels, and the results are excellent when the tuning is done after days or weeks of gathering statistics, analysis, and configuration.

Graph of Qovery Engine autoscaling triggered

Horizontal autoscaling

Horizontal autoscaling scales an application out by adding more app instances and distributing the workload across them, which increases capacity and improves performance.

Application autoscaling is different from cluster autoscaling, but the two are related. When your app runs on a cluster (e.g., Kubernetes), the more instances your app scales up to, the more cluster resources it is likely to consume, and the more likely the cluster is to scale up its nodes.

Horizontal scaling refers to automatically adding or removing instances based on predefined rules or metrics. When the workload increases, additional instances are dynamically provisioned to handle the increased demand. Conversely, excess instances are automatically terminated when the workload decreases to optimize resource utilization and cost.
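
To make this concrete, here is a minimal Python sketch of the rule the Kubernetes Horizontal Pod Autoscaler documents for picking a replica count: the desired number of instances is the current count multiplied by the ratio of the observed metric to its target, then clamped to configured bounds. The metric values below are hypothetical.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Compute the desired replica count the way the Kubernetes HPA does:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped between the configured minimum and maximum."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 3 instances averaging 90% CPU with a 60% target -> scale out to 5.
print(desired_replicas(current_replicas=3, current_metric=90, target_metric=60))
# 5 instances averaging 20% CPU with a 60% target -> scale in to 2.
print(desired_replicas(current_replicas=5, current_metric=20, target_metric=60))
```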

Horizontal autoscaling offers several benefits, including:

  • Improved performance: By distributing the workload across multiple instances, horizontal scaling can handle increased traffic or resource-intensive tasks more effectively, reducing response times and improving overall performance.
  • Enhanced availability: Additional instances provide redundancy and fault tolerance. If one instance fails or becomes overloaded, the load can be automatically distributed to other instances, ensuring uninterrupted service availability.
  • Scalability: Horizontal scaling allows for seamless expansion of an application or system by adding more instances. This flexibility enables businesses to accommodate sudden surges in traffic or increased demand without impacting performance.
  • Cost optimization: Autoscaling allows you to allocate resources based on actual demand. Scaling up or down based on workload ensures efficient resource utilization, preventing overprovisioning and reducing unnecessary costs.

Vertical autoscaling

Vertical autoscaling makes your application more resource-autonomous by upgrading the resources of a single instance instead of adding more machines (scaling horizontally). It's like boosting your computer by increasing its CPU power, memory, storage, or network capacity.

With vertical autoscaling, you can improve your application's performance without the need to manage many instances. It simplifies administration and reduces the complexity of handling a distributed system.

However, there's a maximum limit to how much you can upgrade an instance before hitting hardware constraints. Also, scaling up or down vertically may require restarting or reconfiguring the machine, resulting in temporary downtime or disruption.
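
As a rough sketch of the idea (not any specific product's algorithm), a vertical autoscaler can recommend a new resource request from recent usage: take a high percentile of the observed samples and add a safety margin, resizing the instance instead of adding more. The samples, percentile, and margin below are made-up assumptions.

```python
import math

def recommend_request(usage_samples: list[float],
                      percentile: float = 0.90,
                      safety_margin: float = 0.15) -> float:
    """Recommend a new resource request (e.g., CPU millicores) from recent usage:
    take a high percentile of the observed samples (nearest-rank method) and add
    a safety margin, so the single instance is resized instead of adding more."""
    ordered = sorted(usage_samples)
    rank = max(1, math.ceil(percentile * len(ordered)))  # nearest-rank percentile
    return ordered[rank - 1] * (1 + safety_margin)

# Hypothetical CPU usage samples (millicores) collected over the last day.
cpu_usage = [120, 180, 150, 400, 350, 220, 90, 310, 280, 260]
print(round(recommend_request(cpu_usage)))  # ~402 (p90 of the samples is 350, plus 15%)
```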

Vertical autoscaling is commonly used in traditional setups or when the workload can't be easily distributed across multiple instances. It's handy for applications that require a lot of computational power, memory, or specialized hardware configurations.

Although horizontal autoscaling has gained popularity with cloud computing and containers, vertical autoscaling still plays a role in optimizing the performance and resource utilization of individual instances in specific situations.

Multidimensional autoscaling (Google proprietary)

Multidimensional autoscaling is like having a super-smart system that automatically adjusts the resources of your application or system in multiple dimensions to handle changing demands. It's all about ensuring your application has the right power and capacity when needed.

Think of it as a dynamic team of helpers that can scale up or down in terms of the number of instances and by upgrading or downgrading the resources within each instance. It's like giving your application a turbo boost or dialing it down when the workload changes.

With multidimensional autoscaling, you don't have to adjust resources or add more instances manually. The system takes care of it for you, continuously monitoring metrics like CPU usage, memory, network traffic, or any other custom-defined criteria.

When your application is experiencing high traffic or increased resource demands, multidimensional autoscaling will intelligently add more resources to ensure smooth performance and prevent any slowdowns or crashes. On the other hand, when the workload decreases, it will automatically scale down to optimize resource usage and save costs.
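
To illustrate the concept (and not Google's proprietary implementation), here is a hypothetical Python sketch of a policy that drives both dimensions at once: the replica count follows average CPU utilization, while the per-instance memory request follows observed peak memory usage. All thresholds and values are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class ScalingDecision:
    replicas: int      # horizontal dimension: number of instances
    memory_mib: int    # vertical dimension: per-instance memory request

def decide(current_replicas: int, avg_cpu_pct: float, peak_memory_mib: float,
           cpu_target_pct: float = 60.0, memory_headroom: float = 1.2,
           min_replicas: int = 2, max_replicas: int = 20) -> ScalingDecision:
    """Drive each dimension from its own signal: replicas follow CPU utilization,
    while the per-instance memory request follows observed peak memory usage."""
    replicas = math.ceil(current_replicas * avg_cpu_pct / cpu_target_pct)
    replicas = max(min_replicas, min(max_replicas, replicas))
    memory_mib = math.ceil(peak_memory_mib * memory_headroom)
    return ScalingDecision(replicas=replicas, memory_mib=memory_mib)

# Hypothetical observation: 4 instances at 75% CPU, each peaking at 700 MiB.
print(decide(current_replicas=4, avg_cpu_pct=75, peak_memory_mib=700))
# -> ScalingDecision(replicas=5, memory_mib=840)
```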

The beauty of multidimensional autoscaling is that it considers multiple factors to make the right decisions. It's like having a super-smart teammate who knows when to boost your application and when to hold back to avoid wasting resources.

By employing multidimensional autoscaling, you can ensure your application stays resilient, responsive, and cost-effective. It's like having a magical elastic system that expands and contracts as needed, effortlessly adapting to the ever-changing demands of your application.

Unfortunately, this feature is exclusive to Google Cloud and unavailable as an open-source project.

Conclusion

Three solutions exist. The most common is definitely horizontal autoscaling: many large companies already use it, and it works well in many situations. Vertical autoscaling is helpful, but its limitations restrict much of its usage. Multidimensional autoscaling may be the best of both, but it requires you to know your application very well when setting its limits.

Testing is mandatory to ensure the autoscaler behaves as expected for your application!
