TDengine vs. InfluxDB 3: TSBS Performance Results

Jim Fan

May 22, 2025

1. Introduction

The sheer volume and complex nature of time-series data have given rise to the purpose-built time-series database (TSDB). While performance is crucial for all databases, the size and complexity of time-series data make speed, precision, and scalability especially important.

The performance of your TSDB doesn’t just impact your ability to ingest, store, and analyze large amounts of data; it directly affects your total cost of ownership (TCO). Better ingestion rates, query response times, and compression ratios mean your system consumes fewer resources to process the same amount of data.

To demonstrate TDengine’s high performance, we evaluated the platform against the latest InfluxDB 3 Core. This benchmark report shows that TDengine achieves superior performance across different real-world time-series datasets, namely:

  • 6.0x to 20.7x faster data ingestion
  • 2.3x to 25.8x higher compression ratios
  • 7.6x to 321.6x shorter query response times

2. Methodology

To ensure complete transparency and reproducibility, all test code and procedures are publicly available on GitHub. The testing scenarios are taken from the independent and open-source TSBS framework.

2.1. Time Series Benchmark Suite (TSBS)

Time Series Benchmark Suite (TSBS) is an open-source performance testing platform for time-series data. Originally developed by InfluxData and now maintained by Timescale, the TSBS framework includes data generation and ingestion, query processing, and automated result aggregation for IoT and DevOps use cases. It has been used by a number of database providers, including InfluxData, Timescale, QuestDB, ClickHouse, VictoriaMetrics, and Redis as a benchmarking tool for performance testing.

TSBS currently includes two use cases, one simulating server monitoring in a data center (referred to as the DevOps use case) and another simulating fleet management for a logistics enterprise (referred to as the IoT use case). These use cases are described in detail in the following sections. In this report, both use cases in the TSBS framework were used to assess the performance of TDengine and InfluxDB in an objective, accurate, and verifiable manner.

2.2. Test Scenarios

TSBS does not define standard test scenarios but allows the user to generate desired scenarios by inputting the use case, pseudo-random number generator (PRNG) seed, number of devices, time range of test data, interval between data points, and database system. TSBS generates test data randomly but in a deterministic manner such that inputting the same seed will generate the same set of data each time. The scenarios used in this report follow Timescale with the exception that the time ranges have been adjusted.
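This seeded-generation behavior can be illustrated with a minimal sketch (plain Python `random` standing in for TSBS's actual Go generator):

```python
import random

def generate_points(seed, n):
    """Deterministically generate n pseudo-random values from a seed."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, 100.0) for _ in range(n)]

# The same seed always produces the same data...
assert generate_points(123, 5) == generate_points(123, 5)
# ...while a different seed produces a different dataset.
assert generate_points(456, 5) != generate_points(123, 5)
```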

|                          | Scenario 1 | Scenario 2  | Scenario 3  | Scenario 4 |
|--------------------------|------------|-------------|-------------|------------|
| Devices                  | 100        | 4,000       | 100,000     | 1 million  |
| Duration                 | 2 days     | 2 days      | 3 hours     | 3 minutes  |
| Interval                 | 10 seconds | 10 seconds  | 10 seconds  | 10 seconds |
| Rows per device (IoT)    | 15,549     | 15,558      | 972         | 16         |
| Total rows (IoT)         | 3,109,944  | 124,466,978 | 194,487,997 | 32,414,619 |
| Rows per device (DevOps) | 17,280     | 17,280      | 1,080       | 18         |
| Total rows (DevOps)      | 1,728,000  | 69,120,000  | 108,000,000 | 18,000,000 |
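The DevOps row counts follow directly from the duration and interval, since that use case has no gaps or nulls (the IoT counts differ because they cover both readings and diagnostics records and include simulated out-of-order and missing data). A quick arithmetic check:

```python
def rows_per_device(duration_s, interval_s=10):
    """One row per interval for the DevOps use case (no gaps or nulls)."""
    return duration_s // interval_s

assert rows_per_device(2 * 24 * 3600) == 17_280  # Scenarios 1 and 2: 2 days
assert rows_per_device(3 * 3600) == 1_080        # Scenario 3: 3 hours
assert rows_per_device(3 * 60) == 18             # Scenario 4: 3 minutes

# Total rows are simply devices multiplied by rows per device.
assert 4_000 * 17_280 == 69_120_000              # Scenario 2, DevOps
```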

2.3. IoT Use Case

The IoT use case simulates the data generated by a group of trucks operated by a logistics company. The diagnostics data for these trucks includes one nanosecond-level timestamp, three metrics, and eight tags. The readings data for the trucks includes one nanosecond-level timestamp, seven metrics, and eight tags. The generated datasets may include out-of-order or missing data, intended to simulate scenarios in which trucks may be offline for some time.

A sample data record is described in the following figures.

Sample diagnostics data point in the IoT use case
Sample readings data point in the IoT use case

The metrics in these tables are randomly generated within the following ranges:

  • fuel_state: floating-point number between 0 and 1.0
  • current_load: floating-point number between 0 and 5000.0
  • status: integer 0 or 1
  • latitude: floating-point number between –90.0 and 90.0
  • longitude: floating-point number between –180.0 and 180.0
  • elevation: floating-point number between 0 and 5000.0
  • velocity: floating-point number between 0 and 100
  • heading: floating-point number between 0 and 360.0
  • grade: floating-point number between 0 and 100.0
  • fuel_consumption: floating-point number between 0 and 50
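A minimal sketch of a readings record drawn from these ranges (illustrative only; TSBS's actual generator is written in Go and models correlated, truck-like trajectories rather than independent draws):

```python
import random

# Value ranges for the readings metrics listed above.
READING_RANGES = {
    "latitude": (-90.0, 90.0),
    "longitude": (-180.0, 180.0),
    "elevation": (0.0, 5000.0),
    "velocity": (0.0, 100.0),
    "heading": (0.0, 360.0),
    "grade": (0.0, 100.0),
    "fuel_consumption": (0.0, 50.0),
}

def generate_reading(rng):
    """Draw one reading with every metric inside its documented range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in READING_RANGES.items()}

reading = generate_reading(random.Random(123))
assert all(lo <= reading[m] <= hi for m, (lo, hi) in READING_RANGES.items())
```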

2.4. DevOps Use Case

Note that the CPU-only subset of the DevOps use case was selected in this report. This use case simulates the data generated by CPU monitoring, recording 10 metrics and 10 tags per CPU with a nanosecond-precision timestamp. The generated datasets do not include null or out-of-order data.

A sample data record is described in the following figure.

Sample data point in the DevOps use case

The metrics in this table are all randomly generated floating-point numbers ranging from 0 to 100.

3. Test Environment

All tests described in this report were run on servers with the following specifications located in Amazon Web Services (AWS):

  • CPU: Intel® Xeon® CPU E5-2650 v3 @ 2.30GHz (40 cores)
  • Memory: 251 GB of DDR4 synchronous registered (buffered) RAM at 2133 MT/s
  • Operating system: Ubuntu 22.04 LTS

The following versions of TDengine and InfluxDB were tested:

  • TDengine OSS 3.3.6.3, gitinfo b6a63a76f552b4afb467eb970043471ffa8acfda
  • InfluxDB Core 3.0.0, revision 3b602eead2bb27aee74fb3cfc45f6be806d3b836

3.1. Configuring TDengine

The TDengine server was configured with six vgroups. The default values were retained for all other parameters.

For the TSBS IoT dataset used in this evaluation, one supertable was created for readings and another for diagnostics. Then one subtable was created in each supertable for each vehicle. The value of the name tag for each truck is also used as the name of the subtable, with the prefix d for the diagnostics supertable and r for the readings supertable.

For the DevOps CPU-only dataset used in this evaluation, one supertable was created for all CPUs. A subtable was then created for each CPU. The value of the hostname tag for each CPU is also used as the name of the subtable.
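The supertable and subtable layout described above can be sketched in TDengine SQL roughly as follows; the column names are illustrative and not the exact TSBS schema:

```sql
-- Sketch of the "one table per device" model for the DevOps CPU dataset.
-- Column names are illustrative; the actual TSBS schema has 10 metrics and 10 tags.
CREATE STABLE cpu (
    ts           TIMESTAMP,
    usage_user   DOUBLE,
    usage_system DOUBLE
) TAGS (
    hostname VARCHAR(64)
);

-- One subtable per monitored CPU, named after its hostname tag.
CREATE TABLE host_0 USING cpu TAGS ('host_0');
```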

3.2. Configuring InfluxDB

The InfluxDB server was started as follows:

influxdb3 serve --node-id=local01 --object-store=file --data-dir /data/influx --http-bind=0.0.0.0:8081

This specifies that Parquet files are stored on the filesystem instead of in memory. The default values were retained for all other parameters.

Data was then generated using the standard settings in the TSBS framework:

tsbs_generate_data --use-case="iot" --seed=123 --scale=4000 --timestamp-start="2016-01-01T00:00:00Z" --timestamp-end="2016-01-01T01:00:00Z" --log-interval="10s" --format="influx" > /data/influx/influxdb_iot.out

4. Data Model

4.1. IoT Diagnostics Table Schema in TDengine

4.2. IoT Diagnostics Table Schema in InfluxDB

4.3. IoT Readings Table Schema in TDengine

4.4. IoT Readings Table Schema in InfluxDB

4.5. DevOps Table Schema in TDengine

4.6. DevOps Table Schema in InfluxDB

5. Test Results

5.1. Data Ingestion

Time-series databases need to ingest massive amounts of data, and TDengine achieves the fastest ingestion speeds across all TSBS scenarios, ranging from 6.0 to 20.7 times the speed of InfluxDB Core. At the same time, TDengine delivers compression ratios 2.3 to 25.8 times higher than InfluxDB while using fewer system resources.

5.1.1. Ingestion Speed

Ingestion speed for IoT use case (higher is better)

| Devices           | InfluxDB Core | TDengine OSS  | TDengine vs. InfluxDB |
|-------------------|---------------|---------------|-----------------------|
| 100 devices       | 947,584.19    | 10,739,729.15 | 1133.38%              |
| 4,000 devices     | 941,861.15    | 8,717,632.78  | 925.58%               |
| 100,000 devices   | 878,718.80    | 6,693,731.34  | 761.76%               |
| 1 million devices | 722,307.80    | 4,365,177.36  | 604.34%               |

Ingestion speed for DevOps use case (higher is better)

| Devices           | InfluxDB Core | TDengine OSS  | TDengine vs. InfluxDB |
|-------------------|---------------|---------------|-----------------------|
| 100 devices       | 784,327.27    | 16,244,510.83 | 2071.14%              |
| 4,000 devices     | 663,338.35    | 12,230,268.82 | 1843.75%              |
| 100,000 devices   | 633,864.23    | 11,257,054.78 | 1775.94%              |
| 1 million devices | 644,847.67    | 7,841,434.32  | 1216.01%              |

5.1.2. Disk Space Usage

Disk space occupied for IoT use case scenarios (lower is better)

| Devices           | InfluxDB Core | TDengine OSS | InfluxDB vs. TDengine |
|-------------------|---------------|--------------|-----------------------|
| 100 devices       | 349 MB        | 47 MB        | 742.55%               |
| 4,000 devices     | 7424 MB       | 1846 MB      | 402.17%               |
| 100,000 devices   | 15929 MB      | 3146 MB      | 506.33%               |
| 1 million devices | 3318 MB       | 1423 MB      | 233.17%               |

Disk space occupied for DevOps use case scenarios (lower is better)

| Devices           | InfluxDB Core | TDengine OSS | InfluxDB vs. TDengine |
|-------------------|---------------|--------------|-----------------------|
| 100 devices       | 194 MB        | 8 MB         | 2425.00%              |
| 4,000 devices     | 7909 MB       | 306 MB       | 2584.64%              |
| 100,000 devices   | 7591 MB       | 720 MB       | 1054.31%              |
| 1 million devices | 1858 MB       | 706 MB       | 263.17%               |

TDengine required less disk space to store the TSBS datasets in all scenarios and use cases. Its compression performance ranged from 2.3 to 25.8 times better than InfluxDB's, with notably higher efficiency at scales of 100,000 devices and below.

Compression ratios for each database were calculated based on the raw and compressed data sizes.

| Scenario          | Raw Data  | InfluxDB | TDengine |
|-------------------|-----------|----------|----------|
| IoT Scenario 1    | 1,381 MB  | 3.96:1   | 29.38:1  |
| IoT Scenario 2    | 55,263 MB | 7.44:1   | 29.94:1  |
| IoT Scenario 3    | 86,353 MB | 5.42:1   | 27.45:1  |
| IoT Scenario 4    | 14,392 MB | 4.34:1   | 10.11:1  |
| DevOps Scenario 1 | 670 MB    | 3.46:1   | 83.81:1  |
| DevOps Scenario 2 | 26,819 MB | 3.39:1   | 87.64:1  |
| DevOps Scenario 3 | 41,904 MB | 5.52:1   | 58.20:1  |
| DevOps Scenario 4 | 6,984 MB  | 3.76:1   | 9.89:1   |

InfluxDB achieved compression ratios from 3.39:1 to 7.44:1 while TDengine’s compression performance ranges from 9.89:1 to 87.64:1. TDengine was especially effective at compressing the integer metrics in the DevOps scenario.
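These ratios are consistent with the disk-space figures above: each is simply the raw dataset size divided by the on-disk size. For example, for IoT Scenario 1:

```python
def compression_ratio(raw_mb, stored_mb):
    """Compression ratio expressed as raw size over on-disk size."""
    return raw_mb / stored_mb

# IoT Scenario 1: 1,381 MB raw; 349 MB in InfluxDB, 47 MB in TDengine.
assert round(compression_ratio(1381, 349), 2) == 3.96
assert round(compression_ratio(1381, 47), 2) == 29.38
```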

5.1.3. Resource Consumption

CPU usage during ingestion and compression of the IoT dataset in Scenario 3
Memory usage during ingestion and compression of the IoT dataset in Scenario 3
CPU usage during ingestion and compression of the DevOps dataset in Scenario 3
Memory usage during ingestion and compression of the DevOps dataset in Scenario 3
  • During ingestion and compression, InfluxDB used between 15% and 20% of CPU resources and 12 GB to 23 GB of memory.
  • CPU and memory resources were mostly used at a consistent rate throughout the ingestion and compression period.
  • In both use cases, InfluxDB experienced a spike to over 40% CPU and 29 GB of memory when beginning to process the ingested data.
  • TDengine had higher average usage at 40% CPU and 42 GB of memory in the IoT use case and 28% CPU and 34 GB of memory in the DevOps use case.
  • TDengine’s total resource usage was significantly lower because all ingestion and processing was completed within five minutes, whereas InfluxDB took almost 30 minutes to ingest and process the same dataset.
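The last observation can be made concrete with rough percent-minute arithmetic for the IoT use case, taking 17.5% as an assumed midpoint of InfluxDB's 15% to 20% range and using the approximate durations above:

```python
def cpu_percent_minutes(avg_cpu_percent, duration_min):
    """Total CPU consumption: average utilization multiplied by time."""
    return avg_cpu_percent * duration_min

tdengine = cpu_percent_minutes(40, 5)     # ~40% CPU for about 5 minutes
influxdb = cpu_percent_minutes(17.5, 30)  # ~17.5% CPU for almost 30 minutes

# Despite the higher average utilization, TDengine's total consumption is lower.
assert tdengine == 200
assert influxdb == 525
assert tdengine < influxdb
```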

5.2. Query Performance

As performance can differ based on a number of factors, the TSBS framework covers a wide range of query types. TDengine provided the fastest query response across all scenarios, confirming that organizations dependent on real-time analytics are best served with this purpose-built platform.

Average query response time in milliseconds for IoT use case (lower is better)

| Query type                        | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|-----------------------------------|---------------|--------------|-----------------------|
| last-loc                          | 363.15        | 2.37         | 15322.78%             |
| low-fuel                          | 270.75        | 13.30        | 2035.71%              |
| high-load                         | 343.14        | 2.38         | 14417.65%             |
| stationary-trucks                 | n/a           | 7.48         | n/a                   |
| long-driving-sessions             | n/a           | 9.29         | n/a                   |
| long-daily-sessions               | n/a           | 18.70        | n/a                   |
| avg-vs-projected-fuel-consumption | 989.96        | 101.51       | 975.23%               |
| avg-daily-driving-duration        | n/a           | 82.54        | n/a                   |
| avg-daily-driving-session         | n/a           | 57.31        | n/a                   |
| avg-load                          | 4007.83       | 12.46        | 32165.57%             |
| daily-activity                    | n/a           | 63.39        | n/a                   |
| breakdown-frequency               | 1217.71       | 124.28       | 979.81%               |

n/a: query failed to run or returned no results in InfluxDB 3 Core.
Average query response time in milliseconds for DevOps use case (lower is better)

| Query type             | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|------------------------|---------------|--------------|-----------------------|
| single-groupby-1-1-1   | 34.17         | 1.91         | 1789.01%              |
| single-groupby-1-1-12  | 259.48        | 3.50         | 7413.71%              |
| single-groupby-1-8-1   | 36.60         | 2.89         | 1266.44%              |
| single-groupby-5-1-1   | 38.14         | 2.29         | 1665.50%              |
| single-groupby-5-1-12  | 282.74        | 5.02         | 5632.27%              |
| single-groupby-5-8-1   | 41.31         | 4.01         | 1030.17%              |
| cpu-max-all-1          | 197.98        | 2.83         | 6995.76%              |
| cpu-max-all-8          | 207.62        | 6.20         | 3348.71%              |
| double-groupby-1       | 455.93        | 21.67        | 2103.97%              |
| double-groupby-5       | 486.04        | 40.40        | 1203.07%              |
| double-groupby-all     | 521.14        | 61.73        | 844.22%               |
| high-cpu-1             | 556.82        | 3.43         | 16233.82%             |
| high-cpu-all           | 843.93        | 108.75       | 776.03%               |
| groupby-orderby-limit  | 709.87        | 10.74        | 6609.59%              |
| lastpoint              | 1876.74       | 9.34         | 20093.58%             |

TDengine returned results for all simpler queries in under 20 milliseconds, while InfluxDB 3 Core was at least 10x slower. More complex queries allowed TDengine to show off its processing power, reaching 66x the performance of InfluxDB 3 Core in the groupby-orderby-limit scenario. This demonstrates that TDengine is best prepared to handle the most performance-intensive queries without slowing down. 
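The percentage columns in the tables above are simply the ratio of the two response times; dividing by 100 gives the speedup factor. For example:

```python
def speedup(influxdb_ms, tdengine_ms):
    """How many times faster TDengine answered a given query type."""
    return influxdb_ms / tdengine_ms

# groupby-orderby-limit: 709.87 ms vs 10.74 ms, roughly 66x (6609.59%).
assert round(speedup(709.87, 10.74), 1) == 66.1
# lastpoint: 1876.74 ms vs 9.34 ms, roughly 201x (20093.58%).
assert round(speedup(1876.74, 9.34) * 100, 2) == 20093.58
```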

Notably, six of the twelve IoT query sets failed to run at all in InfluxDB 3 Core, either throwing an error or returning no results. This indicates that InfluxQL compatibility is still an issue for InfluxDB 3, and it is hoped that more complete results can be obtained once this is resolved.

6. Analysis

With version 3.0, InfluxDB uses the Apache Parquet file format for storing data. Parquet includes a range of built-in encoding and compression options, but these are not configurable through InfluxDB. The specific encoding and compression algorithms used by InfluxDB for each column in this test are therefore not known.

TDengine’s encoding and compression options are configurable on a per-column basis. The default values are determined based on the data type of the column and have been optimized to provide the best compression performance for that data type. In this test, the default values have been used for all columns.

It is possible that TDengine's superior compression performance in this test is due to better default choices of encoding and compression algorithms or a more efficient implementation of those algorithms; however, the algorithms available in TDengine and Parquet are similar.

TDengine’s “one table per device” design likely played a larger role in improving compression performance. In this design, one table is created for each device, ensuring that each block of data contains the records for a single table. This ensures that similar data is stored together and can greatly increase compressibility in many time-series scenarios where adjacent values differ only by a small amount.
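A small sketch of why per-device blocks compress better: a hypothetical slowly drifting counter per device, with delta encoding plus zlib standing in for the databases' actual algorithms. Grouped by device, successive deltas are tiny and repetitive; interleaved across devices, successive values jump between unrelated operating points.

```python
import random
import struct
import zlib

rng = random.Random(123)

# 100 devices, each around its own operating point, drifting slowly upward.
bases = [rng.randint(0, 1_000_000) for _ in range(100)]
device_rows = [[base + 2 * i + rng.randint(0, 1) for i in range(1_000)]
               for base in bases]

def compressed_size(values):
    """Delta-encode a series of integers, then compress the deltas."""
    deltas = [b - a for a, b in zip(values, values[1:])]
    return len(zlib.compress(struct.pack(f"{len(deltas)}q", *deltas)))

# One block per device: adjacent values differ by only 1 to 3.
blocked = sum(compressed_size(rows) for rows in device_rows)

# Interleaved storage: consecutive values come from different devices.
interleaved = [device_rows[d][i] for i in range(1_000) for d in range(100)]
mixed = compressed_size(interleaved)

assert blocked < mixed  # grouping by device makes the data far more compressible
```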

The drawback of this model is that when datasets contain a small amount of data from a large number of devices, table creation causes significant storage overhead. This is reflected in the results for Scenario 4, which includes 1 million devices with fewer than 20 records each and showed lower compression performance than the scenarios with a larger number of records per device.

7. Reproducing These Results

We encourage you to verify these results and have developed a script with which you can run TSBS tests on your own machine. On an Ubuntu 22.04 machine, clone our TSBS fork into the /usr/local/src directory. Then open the scripts/tsdbComp directory and run the tsbs_test.sh --help command as the root user.

sudo -s
cd /usr/local/src
git clone https://github.com/taosdata/tsbs
cd tsbs
git checkout enh/add-influxdb3.0
cd scripts/tsdbComp
./tsbs_test.sh --help

This describes the scenarios that you can test and the configuration options available to you.

Note that performance testing by nature requires machines with adequate hardware.

  • If you would like to run the full test suite for the DevOps or IoT use case, use a server with at least 24 cores, 128 GB of RAM, and 500 GB of disk space.
  • If you prefer to run the tests on a personal computer or smaller virtual machine, select the cputest or iottest scenarios. These scenarios run a subset of TSBS that can return results within 45 minutes on most computers. For these scenarios, a machine with 4 cores, 8 GB of RAM, and 40 GB of disk space is required.

8. Conclusion

Across all key test metrics for ingestion and querying, TDengine clearly emerges as the highest-performing time-series database.

  • Ingestion: Although InfluxDB 3 Core’s performance is more stable than older versions when handling larger datasets, its ingestion rates still lag behind modern time-series databases like TDengine.
  • Queries: TDengine has the fastest query response time across all scenarios. InfluxDB Core returned results significantly slower than TDengine in all scenarios and failed to run several IoT queries due to incompatibility.

The performance advantages shown by this evaluation indicate that TDengine excels at time-series data processing, especially with larger datasets and more complex queries. These advantages, combined with its comprehensive feature set and ease of use, make TDengine the best option for growing enterprises to scale their data pipelines.


    Jim Fan is the VP of Product at TDengine. With a Master's Degree in Engineering from the University of Michigan and over 12 years of experience in manufacturing and Industrial IoT spaces, he brings expertise in digital transformation, smart manufacturing, autonomous driving, and renewable energy to drive TDengine's solution strategy. Prior to joining TDengine, he worked as the Director of Product Marketing for PTC's IoT Division and Hexagon's Smart Manufacturing Division. He is currently based in California, USA.