
Optimizing Logs Interfaces: Our Approach with React and Front-End Engineering

In this article, discover how the Qovery engineering team built a high-performance Logs Interface to handle thousands of lines of data efficiently.
September 29, 2025
Rémi Bonnet
Software Engineer

Recently, I spent quite some time at Qovery developing a technically challenging feature: the Logs Interface.

In this post, I'll explain how it works and share some design and front-end tips for efficiently rendering heavy resources in React — that also apply to other front-end stacks.

Perspective

We have two types of application logs: Deployment and Running logs.

Both work similarly, though the Running logs contain more complex row structures and include advanced filtering options.

At Qovery, we help technical teams automate infrastructure and simplify deployments. Log analysis is an essential part of that workflow, giving developers clear insights to troubleshoot and improve their applications directly from the platform.

In this context, the main challenges are handling high volumes of logs (thousands of lines per session), managing performance bottlenecks, and designing a user experience that supports these optimizations.

Rendering Strategy

Too many DOM nodes can cause browser freezes

The page can contain more than 10,000 lines, and rendering them all at once can seriously hurt performance. Here are two common solutions:

  • Virtualization
    This approach renders only the elements visible in the viewport. However, it struggles with variable-height content like multi-line logs. Libraries like virtua or @tanstack/virtual are commonly used.
  • CSS content-visibility
    A property that lets the browser skip rendering elements until they are visible. It's performant but not ideal for long lists since everything stays in the DOM. You can learn more about it in this article.
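For illustration, the content-visibility approach usually comes down to a couple of CSS rules per row (a minimal sketch, not our production styles):

```css
/* The browser can skip rendering work for rows far from the viewport */
.log-row {
  content-visibility: auto;
  /* Reserve an estimated row height so the scrollbar stays stable
     while off-screen rows are skipped */
  contain-intrinsic-size: auto 20px;
}
```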

Both approaches have limitations in our case (virtualization with variable-height rows, content-visibility with very long lists), so we built our own approach around two rendering optimizations:

  • Debounced rendering with WebSockets
    Logs arrive one by one over a WebSocket. Instead of rendering each new log immediately, we buffer them and update the DOM once per second, collapsing many incoming lines into a single render.
  • Render only a subset of logs initially
    On first load, we only render the last 500 logs. This avoids performance issues caused by rendering too many DOM nodes. Users can still load more logs if needed.
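The two optimizations above can be sketched as a small buffer that batches WebSocket messages and caps the rendered window (a simplified sketch; the names and limits are illustrative, not Qovery's actual code):

```typescript
type Log = { timestamp: number; message: string };

const INITIAL_LIMIT = 500;      // render only the last 500 logs at first
const FLUSH_INTERVAL_MS = 1000; // update the DOM once per second, not per message

class LogBuffer {
  private pending: Log[] = [];
  private rendered: Log[] = [];

  // Called for every WebSocket message; cheap, triggers no rendering.
  push(log: Log): void {
    this.pending.push(log);
  }

  // Called by a timer every FLUSH_INTERVAL_MS; produces one render's worth of rows.
  flush(): Log[] {
    this.rendered = [...this.rendered, ...this.pending].slice(-INITIAL_LIMIT);
    this.pending = [];
    return this.rendered;
  }
}

// Browser-side wiring, shown for context only:
// const buffer = new LogBuffer();
// socket.onmessage = (e) => buffer.push(JSON.parse(e.data));
// setInterval(() => setLogs(buffer.flush()), FLUSH_INTERVAL_MS);
```

Because `push` only appends to an array, a burst of thousands of messages costs almost nothing between flushes; the expensive DOM work happens at most once per second.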

Using this approach, we achieve:

  • 108ms (optimized) vs 1084ms (unoptimized)
  • That's a 90% reduction in rendering time

Memory usage and responsiveness see similar improvements.
You can try the demo here and use the performance console tool to see the difference.

Keep row elements simple and minimize redundant DOM elements

Try to keep each row as simple as possible by avoiding unnecessary wrappers or deeply nested elements. This helps reduce DOM size and improves rendering performance.

For example, here's how we optimized icon rendering in each row:

  • Use the SVG use element to reference a shared definition
    Defining the icon once and referencing it everywhere avoids repeating the same markup and reduces DOM complexity. See the example below.
  • Use font icons
    They are lightweight and easy to style across the app.

<!-- Define the icon once, hidden, so every row can reference it -->
<svg style="display: none">
  <symbol id="logs-icon" viewBox="0 0 24 24" fill="none" stroke="#fff" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
    <path d="M13 12h8" />
    <path d="M13 18h8" />
    <path d="M13 6h8" />
    <path d="M3 12h1" />
    <path d="M3 18h1" />
    <path d="M3 6h1" />
    <path d="M8 12h1" />
    <path d="M8 18h1" />
    <path d="M8 6h1" />
  </symbol>
</svg>

<!-- Each row renders a lightweight reference instead of repeating the paths -->
<svg width="24" height="24">
  <use href="#logs-icon" />
</svg>

Better log navigation with simple UI indicators

To help users follow what's happening in the logs, we added small visual cues, similar to what you'd see in messaging apps like Slack.

  • New logs indicator
    When the user scrolls up inside the log list, we pause the rendering of new logs. A button appears at the bottom letting them know new logs are available; clicking it resumes the stream.
  • Previous logs indicator
    Since we only render the last 500 logs by default, a button appears at the top of the list when the user scrolls up, allowing them to load previous logs if needed.
  • Local status indicator
    We show whether logs are loading, streaming, paused, or finished, so users always know what's going on.
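The pause-and-resume behavior behind these indicators can be modeled as a small state machine (a hypothetical sketch; the event and field names are illustrative):

```typescript
type StreamState = {
  paused: boolean;     // true while the user has scrolled away from the bottom
  newLogCount: number; // logs received but not yet rendered (shown on the button)
};

type StreamEvent =
  | { type: "scrolled"; atBottom: boolean } // user scroll position changed
  | { type: "logReceived" }                 // a log arrived over the WebSocket
  | { type: "resumeClicked" };              // user clicked the "new logs" button

function reduce(state: StreamState, event: StreamEvent): StreamState {
  switch (event.type) {
    case "scrolled":
      // Pause when leaving the bottom; auto-resume when scrolled back down.
      return event.atBottom
        ? { paused: false, newLogCount: 0 }
        : { ...state, paused: true };
    case "logReceived":
      // While paused, only count the log; the indicator displays this number.
      return state.paused
        ? { ...state, newLogCount: state.newLogCount + 1 }
        : state;
    case "resumeClicked":
      return { paused: false, newLogCount: 0 };
  }
}
```

Keeping this as a pure reducer makes the UI logic trivial to unit-test, independent of the WebSocket or the DOM.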

These visual indicators, combined with our rendering optimizations, create a comprehensive solution that balances performance with usability.

Conclusion

These techniques have significantly improved our Logs Interface performance. The biggest wins came from design choices: deciding what not to render, rather than optimizing how we render everything.

We have some other optimizations in the pipeline, but this is a solid foundation.

Thanks for reading, and shoutout to my Qovery teammates who built this with me.
Don't hesitate to reach out if you have questions or want to see more about our product!
