Understanding the Basics of Application Autoscaling

Application autoscaling is a broad subject. At first glance it looks simple, because everyone understands the goal and how it works conceptually, but in practice it is not that simple.
Pierre Mavro
CTO & Co-founder

Let’s start with a simple schema explaining what type of app auto-scaling exists today:

Horizontal Autoscaling vs. Vertical Autoscaling vs. Multi-dimensional Autoscaling

At Qovery, we use horizontal and vertical autoscaling daily in production at different levels, and the results are excellent once the tuning is based on days or weeks of statistics, analysis, and configuration.

Graph of Qovery Engine autoscaling triggered

Horizontal autoscaling

Horizontal autoscaling scales an application by adding more instances and distributing the workload across them, increasing capacity and improving performance.

Application autoscaling is different from cluster autoscaling, but the two are related. When your app runs on a cluster (e.g., Kubernetes), the more your app scales up its number of instances, the more cluster resources it is likely to consume, and the more the cluster is likely to scale up its nodes.

Horizontal scaling refers to automatically adding or removing instances based on predefined rules or metrics. When the workload increases, additional instances are dynamically provisioned to handle the increased demand. Conversely, excess instances are automatically terminated when the workload decreases to optimize resource utilization and cost.
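
To make this concrete, here is a minimal sketch in Python of the ratio-based decision rule a horizontal autoscaler applies (Kubernetes' HorizontalPodAutoscaler documents essentially this formula, clamped between a minimum and maximum replica count); the function name and the example numbers are illustrative, not any platform's actual API:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Instance count a ratio-based horizontal autoscaler would aim for."""
    # desired = ceil(current * currentMetric / targetMetric), clamped to bounds
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 instances at 90% average CPU with a 60% target -> scale out to 6.
print(desired_replicas(current_replicas=4, current_metric=90, target_metric=60))
```

In practice, autoscalers also apply stabilization windows and cooldown periods so that short metric spikes don't make the instance count flap up and down.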

Horizontal autoscaling offers several benefits, including:

  • Improved performance: By distributing the workload across multiple instances, horizontal scaling can handle increased traffic or resource-intensive tasks more effectively, reducing response times and improving overall performance.
  • Enhanced availability: Additional instances provide redundancy and fault tolerance. If one instance fails or becomes overloaded, the load can be automatically distributed to other instances, ensuring uninterrupted service availability.
  • Scalability: Horizontal scaling allows for seamless expansion of an application or system by adding more instances. This flexibility enables businesses to accommodate sudden surges in traffic or increased demand without impacting performance.
  • Cost optimization: Autoscaling allows you to allocate resources based on actual demand. Scaling up or down based on workload ensures efficient resource utilization, preventing overprovisioning and reducing unnecessary costs.

Vertical autoscaling

Vertical autoscaling scales an application by upgrading the resources of a single instance instead of adding more instances (scaling horizontally). It's like boosting your computer by increasing its CPU power, memory, storage, or network capacity.

With vertical autoscaling, you can improve your application's performance without the need to manage many instances. It simplifies administration and reduces the complexity of handling a distributed system.
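
At its core, a vertical autoscaler picks a new resource request from observed usage plus a safety margin. The sketch below is a toy, hypothetical rule (real recommenders such as Kubernetes' VerticalPodAutoscaler use histogram-based percentiles with decay), just to show the idea:

```python
def recommend_request(usage_samples: list[float],
                      percentile: float = 0.9,
                      safety_margin: float = 0.15) -> float:
    """Resource request covering `percentile` of observed usage, plus a margin."""
    ordered = sorted(usage_samples)
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index] * (1 + safety_margin)

# Example: CPU usage samples (in millicores) from the last observation window.
cpu_millicores = [180, 220, 250, 300, 320, 410, 390, 280, 260, 240]
print(f"Recommended CPU request: {recommend_request(cpu_millicores):.0f}m")
```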

However, there's a maximum limit to how much you can upgrade an instance before hitting hardware constraints. Also, scaling up or down vertically may require restarting or reconfiguring the machine, resulting in temporary downtime or disruption.

Vertical autoscaling is commonly used in traditional setups or when the workload can't be easily distributed across multiple instances. It's handy for applications that require a lot of computational power, memory, or specialized hardware configurations.

Although horizontal autoscaling has gained popularity with cloud computing and containers, vertical autoscaling still plays a role in optimizing the performance and resource utilization of individual instances in specific situations.

Multidimensional autoscaling (Google proprietary)

Multidimensional autoscaling is like having a super-smart system that automatically adjusts the resources of your application or system in multiple dimensions to handle changing demands. It's all about ensuring your application has the right power and capacity when needed.

Think of it as a dynamic team of helpers that can scale the number of instances up or down while also upgrading or downgrading the resources within each instance. It's like giving your application a turbo boost or dialing it down when the workload changes.

With multidimensional autoscaling, you don't have to adjust resources or add more instances manually. The system takes care of it for you, continuously monitoring metrics like CPU usage, memory, network traffic, or any other custom-defined criteria.

When your application is experiencing high traffic or increased resource demands, multidimensional autoscaling will intelligently add more resources to ensure smooth performance and prevent any slowdowns or crashes. On the other hand, when the workload decreases, it will automatically scale down to optimize resource usage and save costs.
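
Conceptually, this amounts to combining the two rules sketched earlier: for example, scaling the instance count on CPU while resizing the memory request of each instance (Google's implementation pairs CPU-based horizontal scaling with memory-based vertical scaling). The snippet below reuses the hypothetical helpers from the previous sections and is an illustration only, not Google's actual API:

```python
# Conceptual only: combine the horizontal and vertical rules sketched above.
replicas = desired_replicas(current_replicas=4, current_metric=90, target_metric=60)
memory_request = recommend_request([512, 640, 700, 680, 750, 820, 790, 760, 730, 710])

print(f"Scale to {replicas} instances, each requesting ~{memory_request:.0f}Mi of memory")
```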

The beauty of multidimensional autoscaling is that it considers multiple factors to make the right decisions. It's like having a super-smart teammate who knows when to boost your application and when to hold back to avoid wasting resources.

By employing multidimensional autoscaling, you can ensure your application stays resilient, responsive, and cost-effective. It's like having a magical elastic system that expands and contracts as needed, effortlessly adapting to the ever-changing demands of your application.

Unfortunately, this feature is exclusive to Google Cloud and unavailable as an open-source project.

Conclusion

Three solutions exist. The most common is definitely horizontal autoscaling: many large companies already use it, and it works well in many situations. Vertical autoscaling is helpful, but its limitations restrict its usage. Multidimensional autoscaling may be the best of the three, but it requires you to know your application very well when setting limits.

Tests are mandatory to ensure the autoscaler behaves as expected for your application!
