MySQL is one of the most popular relational databases, and many enterprises are using MySQL databases to store data collected from IIoT devices. However, as the number of devices in the environment grows and the demand for real-time data feedback increases, it becomes difficult for MySQL to meet business needs. TDengine can efficiently read data from MySQL and write it into TDengine, enabling migration of historical data as well as synchronization of real-time data.

System Comparison
While MySQL is a well-loved general-purpose database, there are situations where enterprises are better served by deploying a specialized time-series database. The following table describes where each system excels.
| | TDengine | MySQL |
|---|---|---|
| Data Type | Time-series (metrics, logs, telemetry) | Structured (users, orders, inventory) |
| Write Pattern | High-frequency, append-only | Mixed reads/writes |
| Query Type | Time-based queries, aggregation | Joins, filters, ad-hoc queries |
| Storage | High compression, auto retention | General-purpose, manual cleanup |
| Scalability | Horizontal (cluster-ready) | Mostly vertical |
| Best For | IoT, monitoring, industrial data | Web apps, business systems |
Enterprises should consider migrating from MySQL to TDengine if their applications generate large volumes of time-series data, such as IoT sensor readings, system metrics, or logs, and MySQL struggles to keep up with the required write throughput, storage efficiency, or query performance.
If your use case involves high-frequency ingestion, time-based queries (like aggregations over time windows), or data retention policies, TDengine offers a purpose-built solution with built-in compression, fast ingestion, and automatic data lifecycle management. As your dataset grows and real-time analytics becomes critical, TDengine can deliver significantly better performance and lower costs compared to retrofitting MySQL for time-series workloads.
Procedure
This procedure describes how to replicate data from MySQL to TDengine.
- Log in to TDengine Explorer, open the Data In tab, and click Add Source.
- Configure basic information as follows:
  - Name: Enter a unique name for the data replication task.
  - Type: Select MySQL.
  - (Optional) Agent: If needed, select an existing agent from the dropdown menu or click Create New Agent.
  - Target: Specify the TDengine database to which you want to write data from MySQL. If you do not have a database prepared, click Create Database. A SQL sketch for preparing the target database in advance is shown at the end of this step.
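  If you prefer to prepare the target database and supertable in advance rather than using the Create Database button, a minimal TDengine SQL sketch is shown below. The database, supertable, column, and tag names are hypothetical; replace them with a schema that matches your MySQL data.

  ```sql
  -- Hypothetical target schema; adjust names and types to your source data.
  CREATE DATABASE IF NOT EXISTS power;

  CREATE STABLE IF NOT EXISTS power.meters (
      ts      TIMESTAMP,     -- primary timestamp column
      v_value DOUBLE,        -- measured value from the source table
      quality INT            -- quality flag from the source table
  ) TAGS (
      location VARCHAR(64),  -- tag used to split subtables
      group_id INT
  );
  ```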
- Under Connection Configuration, enter the hostname and port number of your MySQL deployment along with the MySQL database that you want to replicate to TDengine.
- Under Authentication Information, enter the username and password with which you want to connect to MySQL. The user you specify must have read permission on the source database; a sketch for creating such a user is shown below.
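  If you still need to create such a user, the following sketch uses standard MySQL statements; the user name, password, host pattern, and database name are placeholders for illustration only.

  ```sql
  -- Create a read-only MySQL user for the replication task (placeholder names).
  CREATE USER 'td_replica'@'%' IDENTIFIED BY 'your_password';
  GRANT SELECT ON source_db.* TO 'td_replica'@'%';
  FLUSH PRIVILEGES;
  ```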
- Configure connection options as follows:
  - Character Set: Set the character set for the connection. The default is utf8mb4, which is supported in MySQL 5.5.3 and later. If you are connecting to an older version, it is recommended to change this to utf8. Options include utf8, utf8mb4, utf16, utf32, gbk, big5, latin1, and ascii.
  - SSL Mode: Set whether to negotiate a secure SSL TCP/IP connection with the server, or the priority of that negotiation. The default value is PREFERRED. Options include DISABLED, PREFERRED, and REQUIRED.

  Then click Check Connection to verify that you can obtain data from the source MySQL database.
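  If you are unsure which character set or SSL settings your MySQL server uses, you can check on the MySQL side before filling in these options; both statements below are standard MySQL.

  ```sql
  -- Inspect the server's default character set and SSL-related variables.
  SHOW VARIABLES LIKE 'character_set_server';
  SHOW VARIABLES LIKE '%ssl%';
  ```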
- Configure the SQL query as follows:

  Subtable Field is used to split subtables. It is a `select distinct` SQL statement that queries unique combinations of the specified fields, which usually correspond to the tags configured in the transform step.

  This configuration is mainly intended to prevent out-of-order data during migration. It must be used together with the SQL Template; otherwise it cannot achieve the expected effect. Usage example:

  - Fill in the subtable field statement `select distinct col_name1, col_name2 from table`, which means using the fields `col_name1` and `col_name2` in the source table to split the subtables of the target supertable.
  - Add the subtable field placeholders to the SQL Template, for example the `${col_name1} and ${col_name2}` part in `select * from table where ts >= ${start} and ts < ${end} and ${col_name1} and ${col_name2}`.
  - Configure `col_name1` and `col_name2` as two tag mappings in the transform step.
  SQL Template is the SQL statement template used for querying data. The SQL statement must include time range conditions, and the start and end times must appear in pairs. The time range defined in the SQL template consists of a column representing time in the source database and the placeholders defined below.

  SQL uses different placeholders to represent different time format requirements, specifically the following placeholder formats:

  - `${start}`, `${end}`: RFC 3339 timestamps with timezone, e.g. 2024-03-14T08:00:00+0800
  - `${start_no_tz}`, `${end_no_tz}`: RFC 3339 strings without timezone, e.g. 2024-03-14T08:00:00
  - `${start_date}`, `${end_date}`: date only, e.g. 2024-03-14

  To avoid out-of-order data during migration, it is advisable to add a sorting condition to the query statement, such as `order by ts asc`. A combined sketch of the Subtable Field statement and SQL Template is shown at the end of this step.

  - Start Time: The start time for migrating data. This field is required.
  - End Time: The end time for migrating data, which can be left blank. If set, the migration task stops automatically after reaching the end time; if left blank, the task continuously synchronizes real-time data and does not stop automatically.
  - Query Interval: The time interval for querying data in segments. The default is 1 day. To avoid querying a large amount of data at once, a data synchronization subtask uses the query interval to retrieve data in segments.
  - Delay Duration: In real-time data synchronization scenarios, to avoid losing data due to delayed writes, each synchronization task reads data from before the delay duration.
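  As a concrete sketch, assume a source table named meters with a timestamp column ts and two columns, location and group_id, that should become tags of the target supertable (all names here are hypothetical). The Subtable Field statement and SQL Template could then be filled in as follows:

  ```sql
  -- Subtable Field: enumerate the distinct combinations used to split subtables.
  select distinct location, group_id from meters;

  -- SQL Template: paired time-range placeholders, the subtable field
  -- placeholders, and a sorting condition to avoid out-of-order data.
  select * from meters
  where ts >= ${start} and ts < ${end}
    and ${location} and ${group_id}
  order by ts asc;
  ```

  In this sketch, location and group_id would then be configured as tag mappings in the data mapping (transform) step, matching the tags of the target supertable.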
- Configure data mapping as follows:

  Click the Retrieve from Server button to fetch sample data from the MySQL server.

  In Extract or Split from Column, fill in the fields to extract or split from the message body. For example: to split the `vValue` field into `vValue_0` and `vValue_1`, select the split extractor, fill in the separator `,`, and the number `2`.

  In Filter, fill in the filtering conditions. For example: if you enter `Value > 0`, only data where Value is greater than 0 is written to TDengine.

  In Mapping, select the supertable in TDengine to map to, as well as the columns to map to the supertable.

  Click Preview to view the results of the mapping.
- (Optional) Configure advanced options as follows:
  - Maximum Read Concurrency: The maximum number of data source connections or reading threads. Modify this parameter if the default value does not meet your needs or if you need to adjust resource usage.
  - Batch Size: The maximum number of messages or rows sent in a single batch. The default is 10000.
- Click Submit to complete the creation of the data replication task from MySQL to TDengine. You can then return to the Data Source List page to view the execution status of the task.
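  Once the task is running, you can verify that data is arriving in TDengine, for example with a simple aggregation query. The database and supertable names below are the hypothetical ones used earlier; replace them with your own.

  ```sql
  -- Hypothetical verification query; adjust names to your target schema.
  SELECT COUNT(*), FIRST(ts), LAST(ts) FROM power.meters;
  ```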