TSBS IoT Performance Report: TDengine, InfluxDB, and TimescaleDB

Jim Fan

July 25, 2025

IoT devices — from the smart devices in your home to the equipment in a modern power plant — are continuously collecting and transmitting information. It’s no surprise, then, that IoT datasets are significantly larger than traditional datasets and pose different challenges. Because IoT data is generated in real time, with devices continuously sending updates, sensor readings, and events at a rapid pace, managing and processing this data presents new challenges for data infrastructure.

Deploying a purpose-built time-series database is a solution that has become popular among enterprise customers in recent years as a way to improve the efficiency of their data infrastructure. Considering the enormous scale of time-series datasets, high performance is a key metric for selecting a time-series database management system.

In particular, data ingestion rates must be high enough to handle the data being generated by IoT devices, and although each data point may be small in size, the sheer number of points generated within a given period can be difficult for some database management systems to ingest. Query latency is also an important factor, especially for real-time analytics; a DBMS must be able to return the results of queries fast enough for the visualization, reporting, and analytics components of the system. Finally, performance has a direct impact on total cost of operation, because systems with lower performance require additional hardware resources to provide acceptable results.

Executive Summary

Time Series Benchmark Suite

To assess the performance of TDengine OSS, an open-source, cloud-native time-series database, a performance evaluation was conducted based on the Time Series Benchmark Suite (TSBS), using the five standard scenarios in the TSBS IoT use case on Amazon Web Services (AWS) instances. In addition, the same evaluation was performed under identical conditions on two other leading time-series database solutions, InfluxDB and TimescaleDB, to compare the performance of the three products and assist enterprises in determining the most appropriate time-series database management system for their business scenarios.

Ingestion Performance

In all five IoT scenarios, the ingestion performance of TDengine exceeded that of TimescaleDB and InfluxDB.

  • TDengine ingested the TSBS data between 1.04 times (Scenario 4) and 3.3 times (Scenario 1) faster than TimescaleDB. Moreover, expanding the number of records per device in Scenario 4 from 18 to 576 and configuring TDengine with 24 vgroups increased its ingestion rate to 7 times that of TimescaleDB.
  • Compared with InfluxDB, the ingestion performance of TDengine was between 1.82 (Scenario 3) and 16.2 (Scenario 5) times higher.

Additionally, during the evaluation, TDengine used the fewest CPU resources and had the lowest disk I/O overhead.

Storage Space

TimescaleDB used much more storage space than InfluxDB or TDengine in all five scenarios, and the difference increased with the size of the dataset. In Scenario 4, TimescaleDB required 11.6 times the storage space of TDengine, and 12.2 times in Scenario 5. In the first three scenarios, InfluxDB and TDengine used a similar amount of storage space, but in Scenarios 4 and 5, InfluxDB required 2.6 and 2.8 times more space, respectively.

Query Performance

In Scenario 1 (with four days of data) and Scenario 2, TDengine responded more quickly than InfluxDB and TimescaleDB in all 12 categories of queries in the TSBS IoT use case, with its advantages particularly apparent in more complex queries. In addition, TDengine used fewer overall compute resources than InfluxDB or TimescaleDB to process these queries. Compared with TimescaleDB, TDengine responded between 1.1 and 16.4 times faster in Scenario 1 (5.1x on average) and between 1.02 and 87 times faster in Scenario 2 (23.3x on average). Compared with InfluxDB, TDengine responded between 2.4 and 155.9 times faster in Scenario 1 (21.2x on average) and between 6.3 and 426.3 times faster in Scenario 2 (68.7x on average).

Methodology

TSBS: Framework for Performance Comparison

Time Series Benchmark Suite (TSBS) is an open-source performance testing platform for time-series data. The TSBS framework includes data generation and ingestion, query processing, and automated result aggregation for IoT and DevOps use cases. It has been used by a number of time-series database providers, including InfluxDB, TimescaleDB, QuestDB, and ClickHouse, as a benchmarking tool for performance testing.

TSBS is widely recognized among industry leaders as a standard for benchmarking time-series databases. In this report, the TSBS framework was used to assess the performance of TDengine along with TimescaleDB and InfluxDB in an objective, accurate, and verifiable manner. For more information about TSBS, see the official GitHub repository at https://github.com/timescale/tsbs.

TSBS Use Case

In this evaluation, the IoT use case was selected. This use case simulates the data generated by a group of trucks operated by a logistics company. The diagnostics data for these trucks includes three metrics, one nanosecond-level timestamp, and eight tags. The readings data for the trucks includes seven metrics, one nanosecond-level timestamp, and eight tags. One record is generated every 10 seconds, and the time-series data for the trucks may include out-of-order or missing data.

Sample diagnostics data point
Sample readings data point
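For reference, in the InfluxDB line-protocol form that TSBS generates, a readings point and a diagnostics point might look like the following. The values here are illustrative only; the tag and field names follow the schema described above (eight tags, seven readings metrics, three diagnostics metrics, and a nanosecond timestamp):

```
readings,name=truck_1,fleet=South,driver=Albert,model=F-150,device_version=v1.5,load_capacity=2000,fuel_capacity=200,nominal_fuel_consumption=15 latitude=52.31854,longitude=4.72037,elevation=124,velocity=0,heading=221,grade=0,fuel_consumption=25 1451606400000000000

diagnostics,name=truck_1,fleet=South,driver=Albert,model=F-150,device_version=v1.5,load_capacity=2000,fuel_capacity=200,nominal_fuel_consumption=15 fuel_state=0.8,current_load=450,status=1i 1451606400000000000
```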

The complete performance evaluation includes the five scenarios described in the following table. Owing to the nature of the IoT dataset, the number of records per vehicle is an average.

Scenarios

Note the short duration of Scenario 4 and Scenario 5 due to the large number of devices.

Data Model

In TimescaleDB and InfluxDB, the TSBS framework can automatically create the data model and generate appropriately formatted data. For this reason, this section discusses only data modeling in TDengine.

In TDengine, a supertable is created for each type of device, and an independent subtable for each individual device is created within the supertable. The data for each device is stored in its subtable, and the device set is managed through the supertable. For the TSBS IoT dataset used in this evaluation, one supertable was created for readings and another for diagnostics. Then one subtable was created in each supertable for each vehicle. The value of the name tag for each truck is also used as the name of the subtable, with the prefix d for the diagnostics supertable and r for the readings supertable.

The following SQL statements were used to create the readings and diagnostics supertables with 8 tags and with 7 and 3 metrics, respectively:

CREATE STABLE readings (ts TIMESTAMP, latitude DOUBLE, longitude DOUBLE, elevation DOUBLE, velocity DOUBLE, heading DOUBLE, grade DOUBLE, fuel_consumption DOUBLE) TAGS (name VARCHAR(30), fleet VARCHAR(30), driver VARCHAR(30), model VARCHAR(30), device_version VARCHAR(30), load_capacity DOUBLE, fuel_capacity DOUBLE, nominal_fuel_consumption DOUBLE);
CREATE STABLE diagnostics (ts TIMESTAMP, fuel_state DOUBLE, current_load DOUBLE, status BIGINT) TAGS (name VARCHAR(30), fleet VARCHAR(30), driver VARCHAR(30), model VARCHAR(30), device_version VARCHAR(30), load_capacity DOUBLE, fuel_capacity DOUBLE, nominal_fuel_consumption DOUBLE);

After the supertables were created, the following statements were used to create the subtable for the readings for truck 1 (r_truck_1) and the subtable for the diagnostics for truck 1 (d_truck_1):

CREATE TABLE r_truck_1 USING readings TAGS ("truck_1", "South", "Albert", "F-150", "v1.5", 2000, 200, 15);
CREATE TABLE d_truck_1 USING diagnostics TAGS ("truck_1", "South", "Albert", "F-150", "v1.5", 2000, 200, 15);

The subtables for all trucks in each scenario were created by the test script using a similar statement. During this process, it was found that TSBS generates some data with a null value for the name of the truck. The d_truck_null and r_truck_null subtables were created to store this data.

Software Versions and Configurations

This section describes the version and configuration of each product used in the evaluation.

TDengine

TDengine 3.0 was cloned from the official GitHub repository. The specific version downloaded was gitinfo 1bea5a53c27e18d19688f4d38596413272484900. The source code in the cloned repository was compiled and installed as follows:

cmake .. -DDISABLE_ASSERT=true -DSIMD_SUPPORT=true -DCMAKE_BUILD_TYPE=Release -DBUILD_TOOLS=false
make -j && make install

The following six query-related parameters were configured:

  • The numOfVnodeFetchThreads parameter was set to 4, specifying four fetch threads on each vnode.
  • The queryRspPolicy parameter was set to 1, enabling the fast return mechanism for queries.
  • The compressMsgSize parameter was set to 128,000, automatically compressing messages exceeding 128,000 bytes on the transport layer.
  • The SIMD-builtins parameter was set to 1, enabling built-in FMA/AVX/AVX2 hardware acceleration if supported by the CPU.
  • The tagFilterCache parameter was set to 1, enabling the filter cache for tag columns.
  • The numOfTaskQueueThreads was set to 24, specifying 24 threads for the task queue.
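Collected in one place, these settings correspond to the following entries in the server configuration file. This is a sketch only: /etc/taos/taos.cfg is the usual location, and the parameter names are taken from the list above.

```
# /etc/taos/taos.cfg (sketch; parameter names as listed above)
numOfVnodeFetchThreads 4
queryRspPolicy         1
compressMsgSize        128000
SIMD-builtins          1
tagFilterCache         1
numOfTaskQueueThreads  24
```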

In the IoT use case, the number of subtables created in TDengine is twice the number of trucks in the dataset. Twelve vnodes were created in TDengine to support the database, and the tables created during testing were randomly assigned to the twelve vnodes based on the table name. LRU caching was enabled in last_row mode. In Scenario 1 and Scenario 2, the stt_trigger parameter was set to 1, causing TDengine to create one sorted time-series table (STT) file. Data smaller than the minimum rows for the target table was first written to the STT file; when the file was no longer able to contain additional data, its contents were ordered and written to the TDengine data file. In Scenario 3, the stt_trigger parameter was set to 8, and in Scenarios 4 and 5 the parameter was set to 16. Additional STT files improve ingestion performance when a dataset has a large number of tables and low write frequency.
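The vnode, cache, and STT settings described above map onto the database creation statement. As a sketch of what this might look like for Scenario 1 (the database name is illustrative; VGROUPS, CACHEMODEL, and STT_TRIGGER are standard TDengine 3.0 options):

```
-- Sketch: 12 vgroups, last_row cache mode, and stt_trigger=1 (Scenarios 1 and 2);
-- stt_trigger would be 8 for Scenario 3 and 16 for Scenarios 4 and 5.
CREATE DATABASE iot_benchmark VGROUPS 12 CACHEMODEL 'last_row' STT_TRIGGER 1;
```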

TimescaleDB

TimescaleDB version 2.10.1 was selected for this evaluation. For optimal performance in TimescaleDB, it is necessary to configure different chunk parameters for each scenario. The configurations used are described in the following table.

TimescaleDB chunk configuration

These configurations take into account the recommendations from Timescale’s performance evaluation to maximize ingestion performance.

InfluxDB

InfluxDB version 1.8.10 was selected for this evaluation. The newer InfluxDB 2.x was not tested in this report because it is not supported by the main branch of the TSBS framework. Therefore the latest version of InfluxDB that TSBS supports was selected.

The method proposed in Timescale’s performance evaluation was used to configure InfluxDB. In addition, the following configurations were added:

cache-max-memory-size = "80g" 
max-values-per-tag = 0 
index-version = "tsi1" 
compact-full-write-cold-duration = "30s"

These settings enable the time-series index (TSI), configure an 80 GB cache, and trigger a full compaction 30 seconds after a shard stops receiving writes.

Test Procedure

Hardware Preparation

In order to create an environment similar to that described in Timescale’s performance evaluation, this evaluation was run on r4.8xlarge EC2 instances in Amazon Web Services (AWS). The client and server were each deployed on a separate node in the environment. These nodes were interconnected through a 10 Gbit/s network connection and had the following specifications:

  • CPU: Intel Xeon CPU E5-2686 v4 @ 2.30GHz (32 vCPU)
  • Memory: 244 GB
  • Disk: 800 GB SSD, 3000 IOPS, 125 MB/s throughput

Node Preparation

The server and client nodes were configured as follows:

  • OS: Linux tv5931 5.15.0-1028-aws #32~20.04.1-Ubuntu SMP Mon Jan 9 18:02:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • gcc: version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04)
  • Environment: Go 1.16.9, Python 3.8, and pip 20.0.2
  • Dependencies: gcc, cmake, build-essential, git, and libssl-dev

Note that the items in the environment and dependencies categories are installed automatically by the test script.

Running the Test Script

To enhance the reproducibility of this evaluation and simplify the process of downloading, installing, configuring, and starting the various database management systems, a script file was created to automate the test procedure. This script requires Ubuntu 20.04.

The client and server were first configured as follows:

  • Passwordless SSH access was configured on the client and server.
  • All ports were opened between the client and server.

The following items were performed with root privileges to obtain the test script:

On the client node, the repository was cloned to the test directory /usr/local/src:

cd /usr/local/src/
apt install git
git clone https://github.com/taosdata/tsbs.git
cd tsbs/scripts/tsdbComp

In the test.ini file, the IP addresses and hostnames of the server and client nodes were configured. The caseType parameter was set to iot.

clientIP="192.168.0.203"
clientHost="trd03"
serverIP="192.168.0.204"
serverHost="trd04"
caseType="iot"

The script was initiated by running the following command:

nohup bash tsdbComparison.sh > test.log &

The test script automatically installed TDengine, InfluxDB, and TimescaleDB and performed all tests in the selected TSBS use case. In the hardware environment described in this evaluation, the test suite required approximately three days to finish. After the tests were completed, reports were generated in CSV format and saved to the /data2/load and /data2/query directories on the client node.

Ingestion Performance

Ingestion Performance by Scenario

Comparison of ingestion performance in each scenario

In all five scenarios, the ingestion performance of TDengine exceeded that of TimescaleDB and InfluxDB.

  • Compared with TimescaleDB, the performance of TDengine ranged from 1.04 times higher in Scenario 5 to 3.3 times higher in Scenario 2.
  • Compared with InfluxDB, the performance of TDengine ranged from 1.8 times higher in Scenario 3 to 16 times higher in Scenario 5.

Performance deteriorated on all systems as the number of devices was increased. TimescaleDB displayed lower performance than InfluxDB with small datasets but surpassed InfluxDB with larger datasets. This is consistent with the TSBS evaluation published by Timescale.

TDengine ingestion performance compared with InfluxDB and TimescaleDB

Resource Consumption During Ingestion

The speed of data ingestion alone is not a complete reflection of the overall performance of the three systems in all scenarios. For a more comprehensive view, the overall load was monitored during the ingestion process for Scenario 4 on the client and server to determine the amount of resources used by each system to ingest data. The metrics measured were CPU usage and disk IOPS on the server and CPU usage on the client.
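The load monitoring itself can be done with standard Linux tools. As a minimal sketch (hypothetical, not the monitoring script used in this evaluation), the aggregate CPU counters that such tools report can be read directly from /proc/stat:

```shell
#!/usr/bin/env bash
# Sketch: read aggregate CPU counters from /proc/stat (Linux).
# busy = user + nice + system jiffies; total additionally includes idle.
# Sampling these counters twice and diffing yields CPU utilization over the interval.
read -r cpu user nice system idle _ < /proc/stat
busy=$((user + nice + system))
total=$((user + nice + system + idle))
echo "busy=$busy total=$total"
```

In practice, a tool such as sar or iostat samples equivalent counters at a fixed interval, which is how per-second CPU and IOPS figures like those in the charts below are typically produced.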

Server CPU Usage

Server CPU usage during ingestion

The figure shows the CPU load on the server while the data from Scenario 4 was being ingested. All three products continued to consume server resources to process the data even after notifying the client that ingestion was complete.

TimescaleDB completed data ingestion and notified the client after approximately 70 seconds but continued using CPU resources to compress and order the ingested data. Although CPU usage during post-ingestion processing reached only half of the peak usage, this usage continued for a significant amount of time, almost 4 times that required by ingestion itself.

InfluxDB took longer to ingest the data and used a far higher amount of CPU resources than the other two products, even reaching 100% usage at times.

In comparison, TDengine used a relatively small amount of CPU resources, remaining under 17% usage during ingestion and processing. This shows that TDengine’s unique data model enables not only higher ingestion performance, but also lower resource usage on the server side.

Disk I/O Comparison

Server I/O usage during ingestion

The figure shows the disk I/O status during the ingestion of the data for Scenario 4, indicating that disk activity and CPU usage are directly related in terms of time.

When writing the same data to identically provisioned disks (3,000 IOPS, 125 MB/s throughput), TDengine required far fewer disk operations than TimescaleDB or InfluxDB. As seen from the figure, disk I/O can be a bottleneck for ingestion performance. InfluxDB used the maximum available disk resources for a significant period of time during the ingestion process, and even TimescaleDB had higher requirements on disk write performance than TDengine when ingesting the same data.

Client CPU Usage

Client CPU usage during ingestion

TDengine required more CPU resources on the client side than TimescaleDB or InfluxDB, as shown in the figure above. InfluxDB displayed the lowest CPU usage on the client, essentially using only server-side CPU resources for ingestion. However, this can cause server resources to become a bottleneck for ingestion performance.

TimescaleDB used less client-side CPU than InfluxDB, reaching a maximum of 17%. TDengine used up to 56% of the CPU resources on the client, though its ingestion process finished more quickly than the other products.

Taking ingestion time into consideration, TDengine required twice as many client-side CPU resources as TimescaleDB. Even so, because it completed ingestion much faster than the other products, TDengine used fewer CPU resources in total.

Ingestion Performance Summary

In all five scenarios, the ingestion performance of TDengine exceeded that of TimescaleDB and InfluxDB. While providing the highest performance, TDengine also used the fewest CPU and disk resources overall, even considering the fact that CPU usage on the client was higher.

Storage Space

Disk space used to store the dataset. Note: smaller numbers indicate better performance.

After all data had been ingested and processed, the amount of disk space required by each product to store the data was calculated. TimescaleDB used much more storage space than InfluxDB or TDengine in all five scenarios, and the difference increased with the size of the dataset. In Scenario 4 and Scenario 5, TimescaleDB required over 11 times the storage space of TDengine. In the first three scenarios, InfluxDB and TDengine used approximately the same disk space. However, in Scenario 4 and Scenario 5, InfluxDB required more than 2 times the disk space of TDengine.

The following table shows the size of the datasets in TimescaleDB before and after compression. In the first three scenarios, the compression ratio is as expected. However, in Scenarios 4 and 5, the compressed data is actually significantly larger than the uncompressed data, which may indicate a bug.

Dataset size in TimescaleDB before and after compression

Query Performance

To evaluate the query performance of the three systems, Scenario 1 (with only 4 days of data, as specified in Timescale’s performance evaluation) and Scenario 2 were used as the base datasets. Prior to initiating the evaluation, TimescaleDB was configured with 8 chunks, as recommended in Timescale’s performance evaluation, for optimal performance. In addition, the time-series index (TSI) was enabled for InfluxDB. The default configuration was used for TDengine, with 1 vnode in Scenario 1 and 6 vnodes in Scenario 2.

Query Performance in Scenario 2

Considering that the response time for certain single queries is particularly short, each TSBS query was performed 2,000 times in Scenario 1 and 500 times in Scenario 2 in order to obtain more accurate and stable results. The TSBS framework automatically calculated all response times for these queries, and the average response time was recorded in this evaluation. During the query performance evaluation, the number of workers on the client was set to 4.

The average response times for queries on the Scenario 2 dataset are listed below.

Query performance in Scenario 2 (latency in milliseconds)
Group 1 query latency in Scenario 2. Note: smaller numbers indicate higher performance.

In TDengine, one table is created for each truck, and the LAST_ROW() function is used to query the latest data. This enables TDengine to provide superior performance to InfluxDB and TimescaleDB.
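As a sketch of what such a query looks like in TDengine (the column selection is illustrative; last_row() and PARTITION BY on a tag column are standard TDengine 3.0 syntax):

```
-- Latest record per truck, computed over the readings supertable
SELECT last_row(*) FROM readings PARTITION BY name;
```

Because each truck has its own subtable and the latest row is kept in the LRU cache (last_row mode), this query can be answered without scanning historical data.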

Group 2 query latency in Scenario 2. Note: smaller numbers indicate higher performance.

In these more complex queries, the performance advantage of TDengine became more evident. For the long-driving-sessions and long-daily-sessions queries, which aggregate over time windows, TimescaleDB experienced significant latency. TDengine responded 8 times faster than TimescaleDB and 132 times faster than InfluxDB in the stationary-trucks category. In the long-daily-sessions category, its performance was 87 times that of TimescaleDB and 6.5 times that of InfluxDB.

Group 3 query latency in Scenario 2. Note: smaller numbers indicate higher performance.
Group 4 query latency in Scenario 2. Note: smaller numbers indicate higher performance.

In the most complex queries, TDengine showed significantly better performance. Its latency for the avg-load and breakdown-frequency queries was 426 and 53 times better, respectively, than InfluxDB. Its latency for the daily-activity and avg-load queries was 34 and 23 times better, respectively, than TimescaleDB.

Resource Consumption

Because the response time for some queries was particularly short, it is not possible to obtain a comprehensive view of resource consumption under the conditions prescribed by TSBS. To measure resource consumption in this evaluation, the daily-activity queries were run 50 times, and the CPU, memory, and network bandwidth resources used by each product were recorded.

Server CPU usage during query

As shown in the figure, the CPU usage of all three products was relatively stable during the query process. CPU usage in TDengine peaked at 70%, and TimescaleDB had the lowest peak usage at 22%. InfluxDB had an average of 98% usage, often briefly reaching 100%.

Although the peak usage of TimescaleDB was lowest, its overall usage was highest because it took the longest to complete the query process. InfluxDB used almost 100% of the CPU resources and took three times as long as TDengine to complete, making its overall CPU usage the second highest. TDengine completed all queries in one-thirtieth the time of TimescaleDB and used the lowest overall CPU resources.

Server memory usage during query

As shown in the figure, TDengine occupied a stable amount of memory, around 12 GB, during the query process. The memory usage of TimescaleDB and InfluxDB was also stable at around 10 GB. TimescaleDB used a relatively large amount of buffer and cache.

Network bandwidth usage during query

As shown in the figure, the network bandwidth used by each product had a direct relationship to its CPU usage. TDengine used the most bandwidth because it processed the queries in the shortest time and therefore transferred results to the client at a higher rate. TimescaleDB and InfluxDB used a similar amount of bandwidth.

Query Performance in Scenario 1

The query performance of each product with the dataset from Scenario 1 is listed in the following table.

Query performance in Scenario 1 (latency in milliseconds)

As shown in the table, even with the smallest dataset, TDengine still delivers the highest query performance, up to 16 times that of TimescaleDB and 155 times that of InfluxDB.

Conclusion

This TSBS evaluation demonstrates that TDengine, thanks to its architecture designed around the characteristics of time-series data, provides higher ingestion and query performance than TimescaleDB and InfluxDB and uses fewer resources to ingest and query data. This evaluation is based on the TSBS IoT use case and builds on the previously published DevOps CPU-only use case results. The performance of TDengine in this use case speaks to its suitability for Industrial Internet of Things (IIoT) applications.

Extended Evaluation

To provide a more comprehensive evaluation of the ingestion performance of TDengine under the default configuration, the same tests were performed on TDengine after modifying two parameters that may affect ingestion performance.

Number of Vnodes

The ingestion performance of TDengine was evaluated with 12, 18, and 24 vnodes in the deployment.

TDengine ingestion performance with different numbers of vnodes

Adjusting the number of vnodes in TDengine did not significantly affect ingestion performance in Scenarios 1, 2, and 3. In Scenarios 4 and 5, however, adding vnodes to TDengine clearly increased ingestion performance. For use cases with large numbers of devices, configuring a larger number of vnodes enables TDengine to offer higher ingestion performance.

Records per Device

TDengine creates one table for every device, which lengthens the time required for preparation prior to ingestion. As the number of records per table increases, this preparation time occupies a smaller percentage of the overall ingestion time; therefore, ingestion performance may increase with the number of records per device. In Scenario 4, only 18 records were written for each of the 1 million devices in the scenario. The following figure shows how the overhead associated with table creation decreases and overall performance increases with 36, 72, 144, and 576 records per device in this scenario. Note that due to intentionally missing data in this use case, the number of records per device is not exact.

Ingestion performance after increasing records per device

In this test, TDengine was evaluated with 12 vnodes and with 24 vnodes. As shown in the figure, the 24-vnode deployment provided higher performance at every scale, and its performance continued to increase as the number of records per device rose, unlike the other systems evaluated. With 12 vnodes processing 576 tuples per device, for a total of 1,037,267,808 records, ingestion performance was 2,827,696.64 metrics per second: 6.1 times faster than TimescaleDB and 4.6 times faster than InfluxDB. However, unlike in the DevOps CPU-only use case, increasing from 12 to 24 vnodes did not yield a significant additional performance advantage.

TimescaleDB showed significant performance deterioration with more than 72 records per device. The performance of InfluxDB remained stable as the number of records per device increased.

Summary

The TSBS framework was extended to provide a more comprehensive evaluation of TDengine by modifying the vgroups parameter in TDengine and changing the scale of the dataset to have more records per device. This extended testing indicates that simple configuration changes enable TDengine to ingest large-scale data (in terms of number of devices and records per device) at higher rates. In addition, TDengine offered better performance than TimescaleDB or InfluxDB in large-scale scenarios while using fewer server-side CPU and disk resources.

  • Jim Fan

    Jim Fan is the VP of Product at TDengine. With a Master's Degree in Engineering from the University of Michigan and over 12 years of experience in manufacturing and Industrial IoT spaces, he brings expertise in digital transformation, smart manufacturing, autonomous driving, and renewable energy to drive TDengine's solution strategy. Prior to joining TDengine, he worked as the Director of Product Marketing for PTC's IoT Division and Hexagon's Smart Manufacturing Division. He is currently based in California, USA.