A cloud-native time-series database (TSDB) takes full advantage of cloud technology and distributed systems in processing time-series data. With a cloud-native time-series database, you can quickly spin up infrastructure to prototype, develop, test, and deliver new applications and features, shortening the time to market while reducing costs through flexible payment models.
For a deeper dive into the benefits of cloud native design for the storage and processing of time-series data, download our white paper Cloud Native: An Essential Element of the Time Series Database.
TDengine provides the six elements – distributed design, scalability, elasticity, resilience, observability, and automation – required of a true cloud-native time-series database.
Distributed Design
The logical structure of a TDengine cluster is shown in the following figure.

Data nodes (dnodes) form a TDengine cluster, which interacts with time-series applications through the TDengine Client (TAOSC). Each logical unit is described as follows:
- Data node (dnode): A dnode is a running instance of the TDengine server on a physical server, virtual machine, or container. A working system must have at least one dnode. Virtual nodes (vnodes), query nodes (qnodes), and management nodes (mnodes) run on dnodes.
- Virtual node (vnode): The vnode is the storage component of a TDengine cluster. It is a shard that contains the time-series data and metadata for a certain number of tables, as well as an independent work unit with its own running threads, memory space, and persistent storage. This design enables better data sharding and load balancing while preventing hotspots and data skew.
- Query node (qnode): A qnode is the compute component of a TDengine cluster. When you run a query, a qnode retrieves the required data from a vnode and performs compute operations, potentially sending the results to another qnode for further computing in cases such as merges. An mnode determines when to start or stop qnodes based on the current workload.
- Management node (mnode): An mnode is the management component of a TDengine cluster, responsible for monitoring and maintaining the running status of all data nodes and performing load balancing among nodes. The mnode is also responsible for the storage and management of metadata, including users, databases, and dnodes, but not tables.
- TDengine Client (TAOSC): The TDengine Client handles interactions between applications and the cluster and provides connection libraries for the programming languages that TDengine supports. The TDengine Client provides a layer of abstraction between applications and the cluster so that applications do not connect directly to dnodes.
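As a minimal illustration of this abstraction, the sketch below connects to a cluster through the Python connector (taospy) and issues a few statements; the client library decides which dnodes and vnodes actually serve each request. The host, credentials, database, and table names are illustrative, and API details may vary by connector version.

```python
# Minimal sketch: talk to the cluster through the client library, never to a dnode directly.
# Assumes the taospy connector is installed and a cluster is reachable at the given host.
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)
cursor = conn.cursor()

# TAOSC routes each request to the appropriate dnode/vnode behind the scenes.
cursor.execute("CREATE DATABASE IF NOT EXISTS power")
cursor.execute("USE power")
cursor.execute("CREATE TABLE IF NOT EXISTS meter_1001 (ts TIMESTAMP, current FLOAT)")
cursor.execute("INSERT INTO meter_1001 VALUES (NOW, 10.3)")

cursor.close()
conn.close()
```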
Scalability
The first essential element required for a cloud-native time-series database is scalability. To achieve scalability for massive data sets, TDengine shards data by data collection point and partitions data by time.

Sharding: All data from a single data collection point is stored in only one vnode, though each vnode can store the data of multiple data collection points. The vnode stores the time-series data and also the metadata, such as tags and schema, for the data collection point. Each data collection point is assigned to a specific vnode by consistent hashing, thereby distributing the data from multiple data collection points across different nodes.
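The sketch below is a conceptual illustration, not TDengine internals, of how hashing each data collection point's table name onto a fixed set of vnodes spreads tables across the cluster; the vnode count and hash function are assumptions.

```python
# Conceptual sketch: map each data collection point (table) to a vnode by hashing its name.
import hashlib

NUM_VNODES = 8  # illustrative; the real number depends on the database configuration

def vnode_for(table_name: str) -> int:
    """Assign a table (one data collection point) to a vnode based on its name hash."""
    digest = hashlib.md5(table_name.encode()).hexdigest()
    return int(digest, 16) % NUM_VNODES

for table in ("meter_1001", "meter_1002", "meter_1003"):
    print(table, "-> vnode", vnode_for(table))
```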
Partitioning: In addition to sharding the data, TDengine also divides it by time period. The data for each time period is stored together, and data from different time periods never overlaps. The length of the time period is configurable and can range from one to multiple days.
With data partitioning, TDengine can locate files involved in your queries based on the time period, providing significant speed improvement for queries. Partitioning data by time also enables efficient implementation of data retention policies. Instead of searching the database for expired data, TDengine can simply delete any files corresponding to time periods that have exceeded the retention policy. This design also makes it much easier to achieve multi-level storage and reduce your storage costs further.
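The following is a conceptual sketch, under assumed period and retention values, of how storing each time period separately makes retention a matter of dropping whole partitions rather than scanning for expired rows; the layout is illustrative, not TDengine's on-disk format.

```python
# Conceptual sketch: partition by time period, enforce retention by dropping whole partitions.
from datetime import datetime, timedelta, timezone

PERIOD_DAYS = 10      # assumed length of each time partition
RETENTION_DAYS = 365  # assumed retention policy

def partition_id(ts: datetime) -> int:
    """All rows in the same N-day window land in the same partition (file set)."""
    return int(ts.timestamp()) // (PERIOD_DAYS * 86400)

def expired_partitions(existing: list[int], now: datetime) -> list[int]:
    """Retention: any partition entirely older than the retention window can be deleted."""
    cutoff = partition_id(now - timedelta(days=RETENTION_DAYS))
    return [p for p in existing if p < cutoff]

now = datetime.now(timezone.utc)
print(partition_id(now), expired_partitions([1900, 1950, partition_id(now)], now))
```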
High cardinality: The metadata for each data collection point is stored on its assigned vnode rather than in a centralized location. When an application inserts data into or queries a specific table, the request is routed directly to the owning vnode based on the hash result; because there is no central metadata node, there is no bottleneck. For aggregations over multiple tables, the query request is first sent to the corresponding vnodes, each vnode performs its part of the aggregation, and then taosc or a qnode merges the partial results from the vnodes, as sketched below.
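This two-stage aggregation can be pictured with the small conceptual sketch below, in which each vnode produces a partial aggregate and a final stage merges them; the data and structure are illustrative only.

```python
# Conceptual sketch of scatter-gather aggregation: vnodes compute partials, a qnode merges them.
partials = []
for vnode_rows in ([10.1, 10.4], [9.8], [11.2, 10.9, 10.7]):   # rows held by each vnode
    # Stage 1: each vnode aggregates locally (here: count and sum, enough for an average).
    partials.append((len(vnode_rows), sum(vnode_rows)))

# Stage 2: taosc or a qnode merges the partial aggregates into the final answer.
total_count = sum(c for c, _ in partials)
total_sum = sum(s for _, s in partials)
print("avg(current) =", total_sum / total_count)
```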
To gain more processing power, you simply add more nodes to the cluster; horizontal scalability comes naturally with this design. In our testing, TDengine has supported 10 billion data collection points across 100 dnodes, effectively eliminating the high-cardinality problem.
Elasticity
The second essential element for a cloud-native time-series database is elasticity. TDengine provides the ability not only to scale up but also to scale down.
To support storage elasticity, TDengine can split a vnode into two when data insertion latency approaches a threshold, allocating more system resources to ingestion. Conversely, it can merge multiple vnodes into one to save resources as long as latency and performance targets can still be met.
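The decision logic might be pictured roughly as follows; the thresholds and policy here are illustrative assumptions, not TDengine's actual algorithm.

```python
# Conceptual sketch: split a "hot" vnode when insert latency crosses a threshold,
# merge lightly loaded vnodes to reclaim resources. Thresholds are assumptions.
SPLIT_LATENCY_MS = 50.0   # assumed threshold above which a vnode is split
MERGE_LATENCY_MS = 5.0    # assumed threshold below which vnodes may be merged

def rebalance(vnode_latencies_ms: dict[str, float]) -> list[str]:
    actions = []
    for vnode, latency in vnode_latencies_ms.items():
        if latency > SPLIT_LATENCY_MS:
            actions.append(f"split {vnode} into two vnodes")
    idle = [v for v, lat in vnode_latencies_ms.items() if lat < MERGE_LATENCY_MS]
    if len(idle) >= 2:
        actions.append(f"merge {idle[0]} and {idle[1]}")
    return actions

print(rebalance({"vnode-1": 72.0, "vnode-2": 3.1, "vnode-3": 2.4}))
```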
For compute elasticity, TDengine introduces the qnode. For a simple query, such as fetching raw or rollup data, the associated vnode does all the work. For a query that requires sorting, grouping, or other compute-intensive operations, one or more qnodes participate in the execution. In deployment, a qnode can run in a container and be started or stopped dynamically based on the system workload.
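As a rough illustration, the sketch below runs both kinds of query through the Python connector; which internal nodes execute each stage is decided by the cluster. The `meters` supertable and its `location` tag are hypothetical, and connector details may vary by version.

```python
# Sketch: a simple raw-data fetch versus a grouped aggregate that may involve qnodes.
# Assumes the taospy connector and the illustrative "power" database from earlier.
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
cursor = conn.cursor()

# Simple raw-data fetch: the owning vnode can answer this on its own.
cursor.execute("SELECT ts, current FROM meter_1001 ORDER BY ts DESC LIMIT 10")
print(cursor.fetchall())

# Grouping across many tables (hypothetical `meters` supertable with a `location` tag):
# the kind of query whose merge and compute stages may be scheduled onto qnodes.
cursor.execute("SELECT location, AVG(current) FROM meters GROUP BY location")
print(cursor.fetchall())

cursor.close()
conn.close()
```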
Because compute resources in a cloud environment are elastic and virtually unlimited, qnodes make TDengine an ideal analytics platform for time-series data, covering both real-time analytics and batch processing.
Resilience
The third essential element for a cloud-native time-series database is resilience. TDengine’s resilience is achieved by its high reliability and high availability design.
For a database, storage reliability is the top priority. TDengine adopts the traditional approach of a write-ahead log (WAL) to guarantee that data can be recovered even if a node crashes. Every incoming data point is written to the WAL before an acknowledgement is sent to the application. The data writing process works as follows:
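A conceptual sketch of this write path, assuming a simple JSON-lines log format rather than TDengine's actual WAL format, might look like this:

```python
# Conceptual sketch: append to the write-ahead log and flush it to disk before
# acknowledging the client, so the row can be replayed after a crash.
import json
import os

WAL_PATH = "vnode1.wal"   # illustrative path, not TDengine's real layout

def write_point(table: str, ts: int, value: float) -> None:
    record = json.dumps({"table": table, "ts": ts, "value": value})
    with open(WAL_PATH, "a") as wal:
        wal.write(record + "\n")
        wal.flush()
        os.fsync(wal.fileno())   # durable on disk before we acknowledge
    # Only now is the write acknowledged to the application and applied to the
    # in-memory buffer that is later flushed into data files.

write_point("meter_1001", 1_700_000_000_000, 10.3)
```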

TDengine provides high availability through data replication for both vnodes and mnodes, using Raft as the consensus algorithm.
Vnodes on different dnodes can form a vnode group whose data is kept consistent through the Raft algorithm. Writes can only be performed on the leader, while queries can be served by both the leader and its followers. If the leader fails, the system automatically elects a new leader and continues to provide service, ensuring high availability.
For mnodes, high availability is provided by configuring three mnodes on three different dnodes in a cluster; these mnodes also keep their data consistent via Raft.
TDengine's design guarantees strong consistency for metadata, while for time-series data it applies eventual consistency instead to achieve better performance.
Observability
The fourth essential element for a cloud-native time-series database is observability. TDengine collects a wide range of metrics to monitor system status, such as CPU, memory, and disk usage, bandwidth, request counts, disk I/O speed, and slow queries. It provides TDinsight, a Grafana dashboard, for visualization and alerting. For more about TDinsight, see the documentation.

TDengine also provides a component named taosKeeper, which can export these metrics to other monitoring tools such as Prometheus, so TDengine can be integrated into an existing observability system.
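As a rough sketch of that integration, the snippet below pulls metrics the way Prometheus would, assuming taosKeeper exposes Prometheus-format metrics over HTTP; the host, port, and path are assumptions and may differ in your deployment.

```python
# Sketch: scrape metrics from taosKeeper as Prometheus would.
# The endpoint below is an assumption; check your taosKeeper configuration.
from urllib.request import urlopen

METRICS_URL = "http://localhost:6043/metrics"   # assumed taosKeeper endpoint

with urlopen(METRICS_URL, timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("taos"):   # keep only TDengine-related series
            print(line)
```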
Automation
The fifth essential element for a cloud-native time-series database is automation. TDengine can be installed from a binary package or a Docker image. A TDengine cluster can be deployed in a Kubernetes environment with the `kubectl` command or a Helm chart, following standard Kubernetes operating procedures, and nodes can be added to or removed from a cluster with `kubectl` or `helm` as well. All of this management can be codified to make operation and maintenance easier, as in the sketch below.
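For example, a routine operation such as scaling the cluster can be codified in a small script; in the sketch below, the namespace and StatefulSet name are hypothetical and should be adjusted to match your own manifests or Helm chart.

```python
# Sketch: codify a cluster operation by driving kubectl from a script.
# The namespace and StatefulSet name are hypothetical placeholders.
import subprocess

def scale_dnodes(replicas: int, namespace: str = "tdengine") -> None:
    subprocess.run(
        ["kubectl", "scale", "statefulset", "tdengine",
         f"--replicas={replicas}", "-n", namespace],
        check=True,
    )

scale_dnodes(5)   # e.g. grow the cluster from 3 to 5 dnodes
```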
Summary
Through its native distributed design, sharding and partitioning, separation of compute and storage, Raft-based data replication, and more, TDengine provides the scalability, elasticity, and resilience required for time-series data processing. With support for containers, Kubernetes deployment, comprehensive monitoring metrics, and automation scripts, TDengine can be deployed and run on public, private, or hybrid clouds to take full advantage of the cloud platform. TDengine is a truly cloud-native time-series database, not merely a cloud-ready one.
Learn more about TDengine: