During the product selection process for a data historian, it can be difficult to confirm that the performance of a product is acceptable for your use case. Some time-series databases may perform well with smaller datasets but begin to experience performance deterioration as your data increases in scale or with high cardinality. In other cases, such as a proof of concept, it may be inconvenient or impossible to run tests on your own data to obtain realistic results.
To assist users in testing the performance of TDengine, the taosBenchmark utility is included in all client and server packages. This utility generates a sample dataset that simulates a smart meter monitoring scenario. You can use this dataset to test the features of TDengine and run queries similar to those that you would use in production.
The taosBenchmark utility is installed automatically when you deploy a TDengine OSS or TDengine Enterprise client or server. If you are using TDengine Cloud only, you can download the TDengine OSS client package and install it on your local machine to use taosBenchmark with your cloud instances. To download TDengine OSS, see Get Started.
Generate a Sample Dataset
To generate a sample dataset, simply run taosBenchmark from a terminal, using the -h command-line parameter if you are connecting to a remote host. Note that taosBenchmark will delete and re-create the test database on your TDengine deployment, so ensure that no important data is located in this database before generating your sample dataset. To connect taosBenchmark to a TDengine Cloud instance, set the TDENGINE_CLOUD_DSN environment variable and create the database as described in the documentation.
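As a sketch of these invocations, assuming the utility is on your PATH (the hostname and DSN value below are placeholders, not real endpoints):

```shell
# Generate the default sample dataset on a local TDengine server.
taosBenchmark

# Connect to a remote host instead (tdengine.example.com is a placeholder).
taosBenchmark -h tdengine.example.com

# For TDengine Cloud, set the DSN environment variable first
# (replace the placeholder with the DSN from your cloud instance).
export TDENGINE_CLOUD_DSN="<your-cloud-dsn>"
taosBenchmark
```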
By default, taosBenchmark creates a sample dataset as follows:
- taosBenchmark creates the meters supertable and 10,000 tables named d0 to d9999 in the test database.
- The meters supertable contains the following columns:
  - A timestamp column
  - Metric columns
  - Tag columns
- Each table from d0 to d9999 is populated with random data in each column.
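You can inspect the generated schema with the DESCRIBE command in the taos shell. As a rough sketch, assuming the default column and tag names used by recent taosBenchmark releases (current, voltage, phase, groupid, and location; verify against your installed version), the supertable corresponds approximately to:

```sql
-- Approximate definition of the default supertable
-- (column and tag names are the utility's assumed defaults).
CREATE STABLE test.meters (
  ts      TIMESTAMP,
  current FLOAT,
  voltage INT,
  phase   FLOAT
) TAGS (
  groupid  INT,
  location VARCHAR(24)
);
```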
After the dataset has been generated, you can run queries on the sample data to test the performance of TDengine and verify whether it would be suitable for your use case. In addition, taosBenchmark is highly configurable: command-line parameters let you generate out-of-order data, add metric or tag columns, and create more or fewer tables, among other options. For detailed information about configuring taosBenchmark, see the documentation.
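As a sketch of both uses, the following commands assume the taos CLI is on your PATH and a sample dataset has been generated; the -t, -n, and -O parameters shown are taosBenchmark options for table count, rows per table, and out-of-order data ratio (check the flags against your installed version):

```shell
# Run an example aggregate query on the generated data via the taos CLI.
taos -s "SELECT COUNT(*) FROM test.meters;"

# Regenerate the dataset with 1,000 tables of 100,000 rows each,
# with roughly 10% of rows written out of order.
taosBenchmark -t 1000 -n 100000 -O 10
```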