1. Introduction
The sheer volume and complex nature of time-series data have given rise to the purpose-built time-series database (TSDB). While performance is crucial for all databases, the size and complexity of time-series data make speed, precision, and scalability especially important.
The performance of your TSDB doesn’t just impact your ability to ingest, store, and analyze large amounts of data; it directly affects your total cost of ownership (TCO). Better ingestion rates, query response times, and compression ratios mean your system consumes fewer resources to process the same amount of data.
To demonstrate TDengine’s high performance, we evaluated the platform against the latest InfluxDB 3 Core. This benchmark report shows that TDengine achieves superior performance across real-world time-series workloads, delivering:
- 6.0x to 20.7x faster data ingestion
- 2.3x to 25.8x higher compression ratios
- 7.6x to 321.6x shorter query response times
2. Methodology
To ensure complete transparency and reproducibility, all test code and procedures are publicly available on GitHub. The testing scenarios are taken from the independent and open-source TSBS framework.
2.1. Time Series Benchmark Suite (TSBS)
Time Series Benchmark Suite (TSBS) is an open-source performance testing platform for time-series data. Originally developed by InfluxData and now maintained by Timescale, the TSBS framework includes data generation and ingestion, query processing, and automated result aggregation for IoT and DevOps use cases. It has been used by a number of database providers, including InfluxData, Timescale, QuestDB, ClickHouse, VictoriaMetrics, and Redis, as a benchmarking tool for performance testing.
TSBS currently includes two use cases, one simulating server monitoring in a data center (referred to as the DevOps use case) and another simulating fleet management for a logistics enterprise (referred to as the IoT use case). These use cases are described in detail in the following sections. In this report, both use cases in the TSBS framework were used to assess the performance of TDengine and InfluxDB in an objective, accurate, and verifiable manner.
2.2. Test Scenarios
TSBS does not define standard test scenarios but allows the user to generate desired scenarios by inputting the use case, pseudo-random number generator (PRNG) seed, number of devices, time range of test data, interval between data points, and database system. TSBS generates test data randomly but in a deterministic manner, such that inputting the same seed generates the same set of data each time. The scenarios used in this report follow those defined by Timescale, with the exception that the time ranges have been adjusted.
| | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 |
|---|---|---|---|---|
| Devices | 100 | 4,000 | 100,000 | 1 million |
| Duration | 2 days | 2 days | 3 hours | 3 minutes |
| Interval | 10 seconds | 10 seconds | 10 seconds | 10 seconds |
| Rows per device (IoT) | 15,549 | 15,558 | 972 | 16 |
| Total rows (IoT) | 3,109,944 | 124,466,978 | 194,487,997 | 32,414,619 |
| Rows per device (DevOps) | 17,280 | 17,280 | 1,080 | 18 |
| Total rows (DevOps) | 1,728,000 | 69,120,000 | 108,000,000 | 18,000,000 |
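As a sanity check, the DevOps row counts follow directly from the duration and interval (the IoT counts are slightly lower and less regular because TSBS simulates trucks going offline):

```shell
# Rows per device = duration / interval (DevOps use case, no gaps)
echo $(( 2 * 24 * 3600 / 10 ))  # Scenarios 1 and 2: 2 days at 10 s -> 17280
echo $(( 3 * 3600 / 10 ))       # Scenario 3: 3 hours at 10 s -> 1080
echo $(( 3 * 60 / 10 ))         # Scenario 4: 3 minutes at 10 s -> 18
```

Multiplying by the device count gives the totals in the table, for example 17,280 × 100 = 1,728,000 rows in Scenario 1.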
2.3. IoT Use Case
The IoT use case simulates the data generated by a group of trucks operated by a logistics company. The diagnostics data for these trucks includes one nanosecond-level timestamp, three metrics, and eight tags. The readings data for the trucks includes one nanosecond-level timestamp, seven metrics, and eight tags. The generated datasets may include out-of-order or missing data, intended to simulate scenarios in which trucks may be offline for some time.
A sample data record is described in the following figures.


The metrics in these tables are randomly generated within the following ranges:
- fuel_state: floating-point number between 0 and 1.0
- current_load: floating-point number between 0 and 5000.0
- status: integer 0 or 1
- latitude: floating-point number between –90.0 and 90.0
- longitude: floating-point number between –180.0 and 180.0
- elevation: floating-point number between 0 and 5000.0
- velocity: floating-point number between 0 and 100
- heading: floating-point number between 0 and 360.0
- grade: floating-point number between 0 and 100.0
- fuel_consumption: floating-point number between 0 and 50
2.4. DevOps Use Case
This report uses the CPU-only subset of the DevOps use case. This use case simulates the data generated by CPU monitoring, recording 10 metrics and 10 tags per CPU with a nanosecond-precision timestamp. The generated datasets do not include null or out-of-order data.
A sample data record is described in the following figure.

Sample data point in the DevOps use case
The metrics in this table are all randomly generated integer values ranging from 0 to 100.
3. Test Environment
All tests described in this report were run on servers with the following specifications located in Amazon Web Services (AWS):
- CPU: Intel® Xeon® CPU E5-2650 v3 @ 2.30GHz (40 cores)
- Memory: 251 GB of DDR4 synchronous registered (buffered) RAM at 2133 MT/s
- Operating system: Ubuntu 22.04 LTS
The following versions of TDengine and InfluxDB were tested:
- TDengine OSS 3.3.6.3, gitinfo b6a63a76f552b4afb467eb970043471ffa8acfda
- InfluxDB Core 3.0.0, revision 3b602eead2bb27aee74fb3cfc45f6be806d3b836
3.1. Configuring TDengine
The TDengine server was configured with six vgroups. The default values were retained for all other parameters.
For the TSBS IoT dataset used in this evaluation, one supertable was created for readings and another for diagnostics. Then one subtable was created in each supertable for each vehicle. The value of the name tag for each truck is also used as the name of the subtable, with the prefix `d` for the diagnostics supertable and `r` for the readings supertable.
For the DevOps CPU-only dataset used in this evaluation, one supertable was created for all CPUs. A subtable was then created for each CPU. The value of the hostname tag for each CPU is also used as the name of the subtable.
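As an illustrative sketch of this "one table per device" layout (the actual DDL is generated by the TSBS loader; the tag names and sample values below follow the general TSBS IoT schema and are not taken from this report), the diagnostics supertable and one subtable might be created as follows:

```shell
# Illustrative only: a supertable for diagnostics (1 timestamp, 3 metrics, 8 tags)
# and one subtable for a hypothetical truck "truck_0", using the "d" prefix
# described above. Assumes a running TDengine server and the taos CLI.
taos -s "
CREATE DATABASE IF NOT EXISTS benchmark VGROUPS 6;
USE benchmark;
CREATE STABLE diagnostics (ts TIMESTAMP, fuel_state DOUBLE, current_load DOUBLE, status BIGINT)
  TAGS (name VARCHAR(32), fleet VARCHAR(32), driver VARCHAR(32), model VARCHAR(32),
        device_version VARCHAR(32), load_capacity DOUBLE, fuel_capacity DOUBLE,
        nominal_fuel_consumption DOUBLE);
CREATE TABLE dtruck_0 USING diagnostics
  TAGS ('truck_0', 'South', 'Trish', 'H-2', 'v2.3', 1500, 150, 12);
"
```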
3.2. Configuring InfluxDB
The InfluxDB server was started as follows:
```shell
influxdb3 serve --node-id=local01 --object-store=file \
  --data-dir /data/influx --http-bind=0.0.0.0:8081
```
This specifies that Parquet files are stored on the filesystem instead of in memory. The default values were retained for all other parameters.
Data was then generated using the standard settings in the TSBS framework:
```shell
tsbs_generate_data --use-case="iot" --seed=123 --scale=4000 \
  --timestamp-start="2016-01-01T00:00:00Z" --timestamp-end="2016-01-01T01:00:00Z" \
  --log-interval="10s" --format="influx" > /data/influx/influxdb_iot.out
```
4. Data Model
4.1. IoT Diagnostics Table Schema in TDengine

4.2. IoT Diagnostics Table Schema in InfluxDB

4.3. IoT Readings Table Schema in TDengine

4.4. IoT Readings Table Schema in InfluxDB

4.5. DevOps Table Schema in TDengine

4.6. DevOps Table Schema in InfluxDB

5. Test Results
5.1. Data Ingestion
Time-series databases need to ingest massive amounts of data, and TDengine achieves the fastest ingestion speeds across all TSBS scenarios, ranging from 6.0 to 20.7 times the speed of InfluxDB 3 Core. At the same time, TDengine delivers compression ratios 2.3 to 25.8 times higher than InfluxDB’s while using fewer system resources.
5.1.1. Ingestion Speed

IoT use case:

| | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|---|---|---|---|
| 100 devices | 947,584.19 | 10,739,729.15 | 1133.38% |
| 4,000 devices | 941,861.15 | 8,717,632.78 | 925.58% |
| 100,000 devices | 878,718.80 | 6,693,731.34 | 761.76% |
| 1 million devices | 722,307.80 | 4,365,177.36 | 604.34% |

DevOps use case:

| | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|---|---|---|---|
| 100 devices | 784,327.27 | 16,244,510.83 | 2071.14% |
| 4,000 devices | 663,338.35 | 12,230,268.82 | 1843.75% |
| 100,000 devices | 633,864.23 | 11,257,054.78 | 1775.94% |
| 1 million devices | 644,847.67 | 7,841,434.32 | 1216.01% |
5.1.2. Disk Space Usage

IoT use case:

| | InfluxDB Core | TDengine OSS | InfluxDB vs. TDengine |
|---|---|---|---|
| 100 devices | 349 MB | 47 MB | 742.55% |
| 4,000 devices | 7,424 MB | 1,846 MB | 402.17% |
| 100,000 devices | 15,929 MB | 3,146 MB | 506.33% |
| 1 million devices | 3,318 MB | 1,423 MB | 233.17% |

DevOps use case:

| | InfluxDB Core | TDengine OSS | InfluxDB vs. TDengine |
|---|---|---|---|
| 100 devices | 194 MB | 8 MB | 2425.00% |
| 4,000 devices | 7,909 MB | 306 MB | 2584.64% |
| 100,000 devices | 7,591 MB | 720 MB | 1054.31% |
| 1 million devices | 1,858 MB | 706 MB | 263.17% |
TDengine required less disk space to store the TSBS datasets in all scenarios and use cases. Its compression performance ranged from 2.3 to 25.8 times better than InfluxDB’s, with the largest advantages in the scenarios of 100,000 devices or fewer.
Compression ratios for each database were calculated based on the raw and compressed data sizes.
| | Raw Data | InfluxDB | TDengine |
|---|---|---|---|
| IoT Scenario 1 | 1,381 MB | 3.96:1 | 29.38:1 |
| IoT Scenario 2 | 55,263 MB | 7.44:1 | 29.94:1 |
| IoT Scenario 3 | 86,353 MB | 5.42:1 | 27.45:1 |
| IoT Scenario 4 | 14,392 MB | 4.34:1 | 10.11:1 |
| DevOps Scenario 1 | 670 MB | 3.46:1 | 83.81:1 |
| DevOps Scenario 2 | 26,819 MB | 3.39:1 | 87.64:1 |
| DevOps Scenario 3 | 41,904 MB | 5.52:1 | 58.20:1 |
| DevOps Scenario 4 | 6,984 MB | 3.76:1 | 9.89:1 |
InfluxDB achieved compression ratios from 3.39:1 to 7.44:1 while TDengine’s compression performance ranges from 9.89:1 to 87.64:1. TDengine was especially effective at compressing the integer metrics in the DevOps scenario.
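For example, the IoT Scenario 1 ratios can be reproduced from the disk-usage figures above (1,381 MB raw; 349 MB on disk in InfluxDB; 47 MB in TDengine):

```shell
# Compression ratio = raw data size / on-disk size (IoT Scenario 1, sizes in MB)
awk 'BEGIN { printf "InfluxDB: %.2f:1\n", 1381 / 349 }'  # 3.96:1
awk 'BEGIN { printf "TDengine: %.2f:1\n", 1381 / 47 }'   # 29.38:1
```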
5.1.3. Resource Consumption




- During ingestion and compression, InfluxDB used between 15% and 20% of CPU resources and 12 GB to 23 GB of memory.
- CPU and memory usage remained largely consistent throughout the ingestion and compression period.
- In both use cases, InfluxDB experienced a spike to over 40% CPU and 29 GB of memory when it began to process the ingested data.
- TDengine showed higher average usage: 40% CPU and 42 GB of memory in the IoT use case, and 28% CPU and 34 GB of memory in the DevOps use case.
- However, TDengine’s total resource usage was significantly lower because all ingestion and processing completed within five minutes, whereas InfluxDB took almost 30 minutes to ingest and process the same dataset.
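A rough way to see why TDengine's total consumption is lower despite its higher average utilization is to multiply average utilization by wall-clock time. This is only a back-of-the-envelope estimate using the IoT figures above and the 40-core server described in Section 3:

```shell
# Approximate CPU budget = avg utilization x cores x minutes (IoT use case)
awk 'BEGIN { printf "InfluxDB: ~%.0f core-minutes\n", 0.175 * 40 * 30 }'  # ~17.5% for ~30 min
awk 'BEGIN { printf "TDengine: ~%.0f core-minutes\n", 0.40  * 40 * 5  }'  # 40% for ~5 min
```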
5.2. Query Performance
As performance can differ based on a number of factors, the TSBS framework covers a wide range of query types. TDengine provided the fastest query response across all scenarios, confirming that organizations dependent on real-time analytics are best served with this purpose-built platform.


IoT use case (query response times in milliseconds; — indicates a query set that failed to run in InfluxDB 3 Core):

| | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|---|---|---|---|
| last-loc | 363.15 | 2.37 | 15322.78% |
| low-fuel | 270.75 | 13.30 | 2035.71% |
| high-load | 343.14 | 2.38 | 14417.65% |
| stationary-trucks | — | 7.48 | — |
| long-driving-sessions | — | 9.29 | — |
| long-daily-sessions | — | 18.70 | — |
| avg-vs-projected-fuel-consumption | 989.96 | 101.51 | 975.23% |
| avg-daily-driving-duration | — | 82.54 | — |
| avg-daily-driving-session | — | 57.31 | — |
| avg-load | 4007.83 | 12.46 | 32165.57% |
| daily-activity | — | 63.39 | — |
| breakdown-frequency | 1217.71 | 124.28 | 979.81% |



DevOps use case (query response times in milliseconds):

| | InfluxDB Core | TDengine OSS | TDengine vs. InfluxDB |
|---|---|---|---|
| single-groupby-1-1-1 | 34.17 | 1.91 | 1789.01% |
| single-groupby-1-1-12 | 259.48 | 3.50 | 7413.71% |
| single-groupby-1-8-1 | 36.60 | 2.89 | 1266.44% |
| single-groupby-5-1-1 | 38.14 | 2.29 | 1665.50% |
| single-groupby-5-1-12 | 282.74 | 5.02 | 5632.27% |
| single-groupby-5-8-1 | 41.31 | 4.01 | 1030.17% |
| cpu-max-all-1 | 197.98 | 2.83 | 6995.76% |
| cpu-max-all-8 | 207.62 | 6.20 | 3348.71% |
| double-groupby-1 | 455.93 | 21.67 | 2103.97% |
| double-groupby-5 | 486.04 | 40.40 | 1203.07% |
| double-groupby-all | 521.14 | 61.73 | 844.22% |
| high-cpu-1 | 556.82 | 3.43 | 16233.82% |
| high-cpu-all | 843.93 | 108.75 | 776.03% |
| groupby-orderby-limit | 709.87 | 10.74 | 6609.59% |
| lastpoint | 1876.74 | 9.34 | 20093.58% |
TDengine returned results for all simpler queries in under 20 milliseconds, while InfluxDB 3 Core was at least 10x slower. More complex queries allowed TDengine to demonstrate its processing power, reaching 66x the performance of InfluxDB 3 Core in the groupby-orderby-limit scenario and over 200x in the lastpoint scenario. This shows that TDengine is best prepared to handle the most performance-intensive queries without slowing down.
Notably, six of the twelve IoT query sets failed to run at all in InfluxDB 3 Core, either throwing an error or returning no results. This indicates that InfluxQL compatibility is still an issue for InfluxDB 3, and it is hoped that more complete results can be obtained once this is resolved.
6. Analysis
With version 3.0, InfluxDB uses the Apache Parquet file format for storing data. Parquet includes a range of built-in encoding and compression options, but these are not configurable through InfluxDB. The specific encoding and compression algorithms used by InfluxDB for each column in this test are therefore not known.
TDengine’s encoding and compression options are configurable on a per-column basis. The default values are determined based on the data type of the column and have been optimized to provide the best compression performance for that data type. In this test, the default values have been used for all columns.
It is possible that TDengine’s superior compression performance in this test is due to more optimal default values for encoding and compression algorithms or more optimal implementation of those algorithms, but the algorithms available are similar in TDengine and Parquet.
TDengine’s “one table per device” design likely played a larger role in improving compression performance. In this design, one table is created for each device, ensuring that each block of data contains the records for a single table. This ensures that similar data is stored together and can greatly increase compressibility in many time-series scenarios where adjacent values differ only by a small amount.
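A toy illustration of this effect (not TDengine's actual codec): when one device's readings are stored contiguously, delta encoding leaves only small residuals, which encode far more compactly than the raw values; interleaving many devices in one block would destroy this locality.

```shell
# Delta-encode one device's velocity readings; adjacent values differ only slightly
printf '50.0\n50.1\n50.3\n50.2\n50.4\n' |
  awk 'NR == 1 { prev = $1; print $1; next } { printf "%+.1f\n", $1 - prev; prev = $1 }'
# Output: 50.0, then the small residuals +0.1, +0.2, -0.1, +0.2
```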
The drawback of this model is that when datasets contain a small amount of data from a large number of devices, there is significant storage overhead from table creation. This is reflected in the results for Scenario 4, which includes 1 million devices with fewer than 20 records each: its compression performance was lower than that of the scenarios with a larger number of records per device.
7. Reproducing These Results
We encourage you to verify these results and have developed a script with which you can run TSBS tests on your own machine. On an Ubuntu 22.04 machine, clone our TSBS fork to the `/usr/local/src` directory. Then open the `scripts/tsdbComp` directory and run the `tsbs_test.sh --help` command as the root user.
```shell
sudo -s
cd /usr/local/src
git clone https://github.com/taosdata/tsbs
cd tsbs
git checkout enh/add-influxdb3.0
cd scripts/tsdbComp
./tsbs_test.sh --help
```
This describes the scenarios that you can test and the configuration options available to you.
Note that performance testing by nature requires machines with adequate hardware.
- If you would like to run the full test suite for the DevOps or IoT use case, use a server with at least 24 cores, 128 GB of RAM, and 500 GB of disk space.
- If you prefer to run the tests on a personal computer or smaller virtual machine, select the `cputest` or `iottest` scenarios. These scenarios run a subset of TSBS that can return results within 45 minutes on most computers. For these scenarios, a machine with 4 cores, 8 GB of RAM, and 40 GB of disk space is required.
8. Conclusion
Across all key test metrics for ingestion and querying, TDengine clearly emerges as the highest-performing time-series database.
- Ingestion: Although InfluxDB 3 Core’s performance is more stable than older versions when handling larger datasets, its ingestion rates still lag behind modern time-series databases like TDengine.
- Queries: TDengine has the fastest query response time across all scenarios. InfluxDB Core returned results significantly slower than TDengine in all scenarios and failed to run several IoT queries due to incompatibility.
The performance advantages shown by this evaluation indicate that TDengine excels at time-series data processing, especially with larger datasets and more complex queries. These advantages, combined with its comprehensive feature set and ease of use, make TDengine the best option for growing enterprises to scale their data pipelines.