New Feature: Instance Type Filtering in Karpenter

We're excited to announce a significant enhancement to Karpenter, the intelligent Kubernetes autoscaler that optimizes your infrastructure for cost and performance. Since we integrated Karpenter six months ago, it has helped our users automatically provision the right AWS instances for their workloads. Today, we're taking this automation a step further with Instance Type Filtering, which gives you precise control over which AWS instance types Karpenter can use when scaling your applications.
September 26, 2025 · Romaric Philogène, CEO & Co-founder

What is Karpenter?

For those new to our platform, Karpenter is an open-source node provisioning project that revolutionizes how Kubernetes clusters scale. Unlike traditional autoscalers that work with predefined node groups, Karpenter dynamically provisions exactly the right compute resources based on your workload demands.

Here's what makes Karpenter special:

  • It can launch nodes in seconds when your applications need them
  • It automatically finds the most cost-effective instance types for your workloads
  • It removes nodes when they're no longer needed, helping you save costs
  • It can handle diverse workload requirements, from CPU-intensive tasks to memory-heavy applications

With Karpenter, you no longer need to pre-provision node groups or worry about over- or under-provisioning; it handles all of this automatically. Discover more about Karpenter.
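For readers who run Karpenter directly, this behavior is driven by a NodePool resource. Here is a minimal sketch (names and values are illustrative, not the configuration Qovery generates for you):

```yaml
# Minimal Karpenter NodePool: lets Karpenter pick any suitable
# instance type and remove underutilized nodes automatically.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default          # assumed EC2NodeClass, defined separately
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
  disruption:
    # Consolidate workloads onto fewer nodes when capacity is
    # empty or underutilized, which is how Karpenter saves costs.
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```

With no `requirements` beyond capacity type, Karpenter is free to choose from the full range of compatible instance types, which is exactly the default the new filtering feature lets you narrow down.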

Introducing Instance Type Filtering

With our latest update, you can now restrict the types of instances Karpenter uses to deploy your applications. This feature provides two key benefits:

  1. Reduce Node Count: By using larger instances, you can consolidate your workloads onto fewer nodes, simplifying management and potentially reducing costs.
  2. Family-Specific Deployments: You can now limit deployments to specific EC2 instance families, ensuring your applications run on hardware that meets their specific requirements.


How to Use Instance Type Filtering

The new filtering feature is available in the Cluster settings for all clusters with Karpenter enabled. You can filter instances based on:

  • Architecture: Choose between AMD64 and ARM64
  • Size: From small to 32xlarge instances
  • Categories/Families: Select from various optimized instance families, including Compute Optimized, Storage Optimized, Accelerated Computing, and more

You can filter instance types on multiple criteria (CPU architecture, size, instance category, and so on).

The visual filter interface makes it easy to select exactly which instance types you want to use, with real-time feedback showing you how many instance types match your criteria.
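Under the hood, these filters correspond to Karpenter's well-known scheduling requirement keys. As a sketch, the following NodePool requirements restrict provisioning to arm64, compute-optimized instances no larger than 2xlarge (the exact manifest Qovery generates may differ; the values here are examples):

```yaml
# Illustrative NodePool requirements mirroring the three filter
# dimensions: architecture, category/family, and size.
requirements:
  - key: kubernetes.io/arch
    operator: In
    values: ["arm64"]
  - key: karpenter.k8s.aws/instance-category
    operator: In
    values: ["c"]               # "c" = compute optimized
  - key: karpenter.k8s.aws/instance-size
    operator: In
    values: ["large", "xlarge", "2xlarge"]
```

The more values you allow per key, the larger the pool of candidate instance types Karpenter can choose from, which is what the interface's real-time match count reflects.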

Coming Soon: Consolidation Scheduling

We're not stopping here! We're already working on the next major feature for Karpenter: Consolidation Scheduling. This upcoming feature will let you restrict node consolidation for a NodePool to specific time windows.

Figma Design for Karpenter Consolidation Feature

You'll be able to:

  • Set specific days for consolidation (e.g., Monday, Friday, Saturday, Sunday)
  • Define precise start times for the consolidation process
  • Set the duration of the consolidation window

This feature will be particularly useful for optimizing costs during off-peak hours while maintaining performance during high-traffic periods.
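If you manage Karpenter yourself today, a similar effect can be approximated with NodePool disruption budgets, which the upcoming feature will likely build on (an assumption on our part, not a statement about the implementation). This sketch blocks consolidation during weekday business hours so it only runs off-peak:

```yaml
disruption:
  consolidationPolicy: WhenEmptyOrUnderutilized
  budgets:
    # Block consolidation-driven disruption during weekday
    # business hours (09:00-17:00, cluster time).
    - nodes: "0"
      reasons: ["Underutilized"]
      schedule: "0 9 * * mon-fri"
      duration: 8h
    # Outside that window, allow up to 10% of nodes to be
    # disrupted at once.
    - nodes: "10%"
```

When several budgets are active at the same time, Karpenter applies the most restrictive one, so the `nodes: "0"` budget effectively pauses consolidation for the duration of its schedule.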

Getting Started

To start using Instance Type Filtering:

  1. Navigate to your cluster's settings
  2. Ensure Karpenter is enabled for your cluster
  3. Look for the "Instance types scope" section
  4. Click "Edit" to access the visual filter interface
  5. Select your desired architectures, sizes, and instance families
  6. Save your changes

Remember to review the IAM permissions warning and ensure your Qovery user has the correct permissions before deploying your cluster with these new settings.

Conclusion

Instance Type Filtering is a powerful addition to Karpenter that gives you more control over your Kubernetes infrastructure while maintaining the simplicity and efficiency that Karpenter is known for. We're excited to see how you'll use this feature to optimize your deployments, and we look forward to bringing you more improvements in the future.

Try out Instance Type Filtering today and let us know what you think! And stay tuned for the upcoming Consolidation Scheduling feature that will bring even more optimization capabilities to your clusters.
