TDengine automatically generates real-time dashboards and reports based on collected data—no manual input or configuration required. Even without deep domain expertise, knowledge of SQL, or experience with analytics tools, you can gain a clear understanding of whether operations are running smoothly, where efficiency can be improved, and whether potential risks exist. TDengine significantly lowers the barrier to extracting value from your data.
Comparison with Traditional Analytics and Chat BI
With traditional BI or visualization tools, analyzing data requires a deep understanding of data sources, structures, and field definitions. You need to know how to clean and transform data, understand star and snowflake schemas, manage relationships between fact and dimension tables, and define business metrics. Proficiency in data analysis methods, chart selection, and formatting is essential—as is fluency in SQL or scripting languages like Python or R. On top of that, mastering the tool itself often comes with a steep technical and business learning curve.
The rise of large language models (LLMs) has led to a wave of Chat BI tools. These allow users to describe analyses in natural language and generate dashboards or reports automatically, often with a co-pilot experience to assist in design. This greatly improves efficiency. However, Chat BI still relies heavily on the user’s domain expertise. As the saying goes, “asking the right question is half the answer”—but unfortunately, even domain experts may overlook insights due to limited experience or focus. As a result, data-driven value creation remains out of reach for many users.
TDengine includes Chat BI capabilities but goes a step further. Instead of waiting for users to ask questions, TDengine uses LLMs and contextualized data to automatically detect application scenarios, then proactively recommends relevant real-time analyses, dashboards, or reports. Users can provide feedback by selecting Like or Dislike, helping the system refine its recommendations. Once confirmed, TDengine generates the corresponding configuration and instantly renders the dashboard or report—no manual input required. Compared to the “ask-and-analyze” approach of Chat BI, this feature represents a new paradigm: proactive, AI-powered insight delivery.
How the AI Agent Works
At the core of TDengine’s ability to automatically generate dashboards, reports, and real-time analyses is its built-in, multi-tasking AI agent. The main workflow of this AI agent is as follows:

- The AI Agent retrieves the table structure for each device or logical entity from the data platform. This includes table names, descriptions, column names, data types, units, and other supporting metadata, along with information about each entity’s associated subsystems.
- Using this metadata as context, the AI Agent constructs prompts and asks the LLM to recommend relevant real-time analyses, dashboards, or reports for the detected scenario.
- After the LLM generates its response, the AI Agent performs a validity check to filter out any incorrect or unusable outputs.
- Based on the validated response, the AI Agent automatically generates the necessary configuration files for the visualization and reporting modules.
- The visualization/reporting module then retrieves the relevant data from the data platform using the configuration and presents the final result to the user.
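To make the final steps concrete, the configuration the AI Agent generates typically embeds windowed queries like the one below. This is an illustrative sketch only: the supertable `meters` and its `voltage` column are hypothetical names, not output the agent is guaranteed to produce.

```sql
-- Hypothetical query an LLM might generate for a "voltage trend" panel:
-- 10-minute average voltage per device over the last day,
-- computed directly against a supertable (no JOINs needed).
SELECT _wstart, AVG(voltage) AS avg_voltage
FROM meters
WHERE ts >= NOW() - 1d
PARTITION BY tbname
INTERVAL(10m);
```

The visualization module runs such queries against the data platform and binds the results to the chart defined in the generated configuration.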
How We Did It
At first glance, the workflow above may seem straightforward—and in theory, many could imagine such a process. But in practice, implementing it presents significant engineering and technical challenges. In most data platforms, there are numerous databases and tables; in industrial scenarios, the number of data points can exceed tens of millions, with thousands of different device types. Enabling an LLM to understand the relationships between these databases and tables, and to extract the business meaning of each table and field, is extremely difficult. For complex queries, having an LLM handle text-to-SQL translation is still highly challenging.
So why is TDengine able to do it? There are a few key reasons:
- TDengine uses a unique storage model based on a “one device, one table” approach. If you have one million devices, you create one million tables—one per device. Even if a single device contains multiple subsystems with different sampling frequencies, or if data points are frequently added, removed, or changed, TDengine’s innovative virtual table design still allows the entire device to be represented as a single logical table. Additionally, TDengine introduced the concept of supertables, which simplify aggregation across similar devices by enabling queries on a single supertable instead of multiple individual ones. Together, virtual tables and supertables dramatically reduce the need for JOIN operations and simplify SQL queries—making automated SQL generation by LLMs much more feasible.
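The "one device, one table" model and supertables can be illustrated with a smart-meter example in TDengine SQL (table and tag names here are illustrative):

```sql
-- A supertable defines the shared schema and tags for one device type.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT)
  TAGS (location VARCHAR(64), group_id INT);

-- One device, one table: each meter gets its own subtable.
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
CREATE TABLE d1002 USING meters TAGS ('California.LosAngeles', 3);

-- Aggregating across all devices requires no JOIN:
-- query the supertable and group by a tag.
SELECT location, AVG(voltage) FROM meters GROUP BY location;
```

Because every query targets a single (super)table, the SQL an LLM must generate stays short and uniform, which is precisely what makes automated generation tractable.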
- At its core, TDengine is a high-performance, distributed time-series database (TSDB) capable of ingesting, cleaning, transforming, and storing data from a wide range of sources—including MQTT, Kafka, OPC-UA, and OPC-DA. It also features a powerful built-in stream processing engine that supports a variety of trigger types such as tumbling windows, sliding windows, event windows, state windows, session windows, and count windows. The engine enables expression-based computation, time window aggregation, and cross-stream aggregation—and can actively notify applications with trigger events and computed results. Stream processing tasks are defined and managed using standard SQL, making them easy for applications to interact with—and far easier for LLMs to generate automatically.
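Because streams are defined in standard SQL, a continuous computation is a single statement. The sketch below shows one common form, reusing the hypothetical `meters` supertable; the stream and target-table names are illustrative:

```sql
-- Continuously compute 10-minute per-device voltage averages into the
-- supertable power_avg, emitting results as each window closes.
CREATE STREAM power_stream TRIGGER WINDOW_CLOSE
  INTO power_avg AS
    SELECT _wstart, AVG(voltage) AS avg_voltage
    FROM meters
    PARTITION BY tbname
    INTERVAL(10m);
```

A statement of this shape is far easier for an LLM to emit correctly than an equivalent pipeline written in application code.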
- Building on its TSDB foundation, TDengine introduces an industrial data management platform that allows users to create a unified data directory and apply both standardization and contextualization to stored data. It supports configurable templates for devices, attributes, dashboards, analyses, and notifications; automatic unit conversions; expression-based calculations; naming patterns; string construction; and cross-table data referencing—all contributing to consistent, standardized data modeling. At the same time, users can enrich each device and attribute with metadata such as descriptions, limits, locations, physical units, and tags—giving the data business meaning and enabling full contextualization. TDengine also provides a hierarchical tree model to organize data assets, making it easier to navigate and to establish relationships between physical or logical entities.
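As a sketch of what contextualization looks like at the schema level (all names here are illustrative, and unit or limit metadata would live in the platform's data catalog alongside the schema):

```sql
-- A supertable whose description is stored via COMMENT and whose tags
-- carry site context; pressure is in kPa and flow_rate in m3/h,
-- with those units registered as attribute metadata in the catalog.
CREATE STABLE pumps (ts TIMESTAMP, pressure FLOAT, flow_rate FLOAT)
  TAGS (site VARCHAR(64), line_id INT)
  COMMENT 'Hydraulic pumps on the main production lines';
```

It is this layer of descriptions, units, limits, and hierarchy that lets the LLM infer business meaning instead of guessing from raw column names.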
With all of these foundational capabilities, the vast amount of data stored in the TDengine platform becomes an AI-ready dataset. Without these features—such as SQL simplification through supertables and virtual tables, built-in stream processing for real-time analysis, and data standardization and contextualization to add business semantics—automatically generating real-time dashboards and reports would simply not be possible with a generic time-series database.
From Pull to Push: A Paradigm Shift in Data Consumption
TDengine’s innovations and engineering breakthroughs have brought about a fundamental shift in how data is consumed. Traditionally, data analysis has followed a pull model—users initiate queries (like SQL), and systems respond with results. Now, powered by LLMs, TDengine’s AI agent enables data to speak for itself, proactively pushing insights and analysis to users. This turns data consumption into a push-driven experience, where analysis finds you—lowering the barrier to zero and ushering in a new, “TikTok-style” era of data interaction.
Thanks to a foundation of structured, standardized, and contextualized data—combined with LLM-driven intelligence—TDengine becomes a self-driven, real-time analytics platform. It no longer depends on user expertise or tooling proficiency, and instead operates autonomously. TDengine is leading the way in this transformation, and we believe many more systems will follow this path in the near future.
Looking ahead, TDengine will take the next step by exposing its AI-ready data through open APIs for third-party applications. What it offers is no longer just raw SQL query results, but rich, contextualized, business-aware data outputs—empowering a new generation of AI applications and helping data owners fully unlock the value of their data.
10x Efficiency Gains
This paradigm shift in data consumption delivers order-of-magnitude improvements in operational efficiency. Traditionally, data analysis relied heavily on collaboration between business experts and data or IT teams. Business leaders often lacked technical skills, while engineers lacked deep domain knowledge—creating a significant gap that slowed down analysis and delayed decision-making. By generating analyses directly from contextualized data, TDengine’s AI agent removes much of this back-and-forth: with faster access to insights and a compressed workflow, decisions become not only timelier but also more accurate and impactful.
At the same time, traditional industries like steel, oil, and power often require 5 to 10 years of domain experience before analysts can ask the right questions. With TDengine, much of that barrier is removed. For everyday analysis, users no longer need years of training—just a few days. While advanced analytics still benefits from expert input, the foundation is now far more accessible.
To build an effective industrial or IoT data platform, all you need is to connect your data sources, define governance standards, and use TDengine’s built-in tools to standardize and contextualize your data. Everything else is taken care of—enabling you to move faster, go further, and do more with your data.