
Read Caching in TDengine

Jeff Tao

September 17, 2025

In industrial IoT (IIoT) applications, what’s happening right now is often far more important than what happened at any time in the past. Whether it’s monitoring the temperature or pressure of a machine, tracking a vehicle’s location, or displaying meter data on dashboards, delays or stale readings can slow responses, degrade the user experience, and even create safety risks. For companies building dashboards, control systems, or alerting pipelines, current-value query performance is critical: the system must return the latest reading quickly, without waiting on disk access or complex processing.

To achieve that speed, many platforms add an external cache layer, like Redis, in front of the database. However, that often means extra complexity and cost: deploying and managing another cluster, ensuring data consistency between cache and storage, and dealing with synchronization delays or discrepancies.

TDengine addresses this challenge directly by building read cache capabilities into the time-series database itself. This allows fast access to the latest data with minimal configuration and no external caching system.

TDengine’s Solution: Built-in Read Cache

TDengine TSDB’s read cache is purpose-built for workloads where the latest value matters most. Instead of forcing every query to scan storage, it keeps recent data in memory so applications can fetch current readings instantly. For dashboards that need to refresh every few seconds, or control systems that trigger alarms the moment a threshold is crossed, this difference translates directly into faster response times and a smoother user experience.

The cache also scales naturally with high-frequency ingestion environments. In industrial deployments, thousands of devices may be reporting new values every second. By serving these requests from memory rather than disk, TDengine TSDB reduces the load on storage subsystems and avoids performance bottlenecks that would otherwise appear at scale.

At the same time, efficiency is not sacrificed. The cache uses a time-driven strategy that prioritizes keeping fresh data in memory while writing older data out to disk in batches. This reduces unnecessary disk I/O and keeps long-term storage lean, while still ensuring recent data is always available at in-memory speed.
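The idea behind this strategy can be illustrated with a small sketch. The code below is not TDengine’s actual implementation; it is a simplified model of the concept: the latest row per device stays in memory for instant reads, while superseded rows are queued and written out in batches to cut down on disk I/O. All names here (LastValueCache, flush_batch_size) are illustrative.

```python
from collections import deque

class LastValueCache:
    """Illustrative sketch, not TDengine internals: keep the latest row per
    device in memory and batch superseded rows for a periodic disk flush."""

    def __init__(self, flush_batch_size=100):
        self.latest = {}        # device_id -> (timestamp, value), served from memory
        self.pending = deque()  # superseded rows awaiting a batched write
        self.flush_batch_size = flush_batch_size
        self.flushed = []       # stands in for durable on-disk storage

    def write(self, device_id, timestamp, value):
        prev = self.latest.get(device_id)
        if prev is not None:
            # The previous reading is no longer "current": queue it for disk.
            self.pending.append((device_id, *prev))
        self.latest[device_id] = (timestamp, value)
        if len(self.pending) >= self.flush_batch_size:
            self.flush()

    def flush(self):
        # One batched write instead of one disk operation per row.
        self.flushed.extend(self.pending)
        self.pending.clear()

    def last_row(self, device_id):
        # Current-value query: answered from memory, no storage scan.
        return self.latest.get(device_id)
```

In this model, a current-value lookup is a dictionary access, while older rows reach storage only in batches; that is the essence of trading a small amount of memory for both query latency and reduced disk pressure.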

Configuring and Using the Read Cache

The read cache in TDengine TSDB is configured at the database level with just a few parameters. The cachemodel setting defines whether TDengine TSDB should cache the last row, the last non-null value for each column, or both. The cachesize setting allocates memory per vnode, giving administrators direct control over performance versus resource use.
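As a sketch of what that configuration looks like, the statements below follow the cachemodel and cachesize options described in the TDengine documentation; the database name "power" and the specific cache size are illustrative, and you should confirm exact option names and defaults against the docs for your version.

```sql
-- Cache both the last row and the last non-null value per column,
-- with 16 MB of cache memory per vnode ("power" is a hypothetical database).
CREATE DATABASE power CACHEMODEL 'both' CACHESIZE 16;

-- The cache model of an existing database can also be changed:
ALTER DATABASE power CACHEMODEL 'last_row';
```

The cachemodel values are 'none', 'last_row', 'last_value', or 'both', letting you cache only what your queries actually need.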

Once enabled, queries like SELECT LAST(*) and SELECT LAST_ROW(*) run against cached data, returning results instantly. For applications such as dashboards, monitoring systems, and alerting pipelines, this means users always see the most recent values without delays, while the system continues to scale smoothly as data volume grows.
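For example, queries along these lines would be served from the cache; "meters" is a hypothetical supertable of device readings, and the per-device form with PARTITION BY should be checked against the SQL reference for your TDengine version.

```sql
-- Most recently written row across all devices:
SELECT LAST_ROW(*) FROM meters;

-- Latest non-null value for each column:
SELECT LAST(*) FROM meters;

-- Latest reading per device (one row per child table):
SELECT LAST(*) FROM meters PARTITION BY tbname;
```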

For detailed information about using the read cache, see the documentation.

  • Jeff Tao

    With over three decades of hands-on experience in software development, Jeff has had the privilege of spearheading numerous ventures and initiatives in the tech realm. His passion for open source, technology, and innovation has been the driving force behind his journey.

    As one of the core developers of TDengine, he is deeply committed to pushing the boundaries of time-series data platforms. His mission is crystal clear: to architect a high-performance, scalable solution in this space and make it accessible, valuable, and affordable for everyone, from individual developers and startups to industry giants.