AI-Powered Insights: Data That Speaks for Itself

Analytics has always been a pull model: you ask, the system answers. That model rests on two assumptions — you know what to ask, and you have time to ask it. For plant engineers without a data science background, or analysts inheriting an unfamiliar system, neither assumption holds. The deeper problem is that even domain experts are limited by their own blind spots: the questions they never think to ask are often the most valuable ones.

TDengine’s AI-powered insights break this cycle from two directions. The system proactively pushes findings before you ask. When you want to go deeper, you can query at any time. Two engines underpin this: TDgpt, TDengine’s built-in time-series AI engine, handles computation-intensive tasks — anomaly detection, forecasting, missing data imputation — running directly inside the database core. An external large language model, connected via a standard OpenAI-compatible interface, handles natural language understanding, content generation, and multi-step reasoning.
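The LLM side of this split talks the standard OpenAI chat-completions request shape. As a minimal sketch — the model name and message content below are placeholders, not TDengine defaults — a request body might be assembled like this:

```python
import json

# Minimal sketch of an OpenAI-compatible chat request. The model name and
# message content are placeholders; TDengine IDMP sends its own prompts to
# whatever compatible endpoint you configure.
payload = {
    "model": "your-model-name",
    "messages": [
        {"role": "system", "content": "You are a time-series analysis assistant."},
        {"role": "user", "content": "Summarize yesterday's voltage trend for meter_001."},
    ],
}
body = json.dumps(payload)  # POSTed to the configured chat-completions endpoint
```

Because this interface is a de facto standard, the endpoint can be a hosted service or a locally deployed private model, as long as it accepts this request shape.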

A 15-day trial LLM connection is built in, so AI features are available immediately, without any configuration.

Zero-Query Intelligence: Insight Delivered Before You Ask

When you open a device’s Panels tab, AI has already generated a set of visualizations based on that device’s attributes, template type, and historical data — power curves, voltage trends, efficiency metrics — ready without any configuration. Open the Analyses tab and the system has already recommended real-time analysis rules suited to that device; save with one click and they start running. Facing a device you have never worked with before, you open the page and the analysis is already there.

This is not template matching. TDengine IDMP’s AI agent reads each element’s structural metadata, attribute descriptions, unit configurations, and time-series data; builds a prompt; and calls the large language model to generate panel configurations and analysis rules that can be used directly. The AI also produces an industry-aligned KPI library for each asset class — load factor and voltage stability index for electricity meters, production efficiency and water cut rate for oil wells — each entry accompanied by a calculation formula, TDengine SQL, and a plain-language explanation of business meaning, giving teams a ready-made analytical baseline.
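The metadata-to-prompt step can be sketched roughly as follows. This is an illustration only — the field names (`template`, `attributes`, and so on) and the prompt wording are assumptions, not IDMP's actual prompt format:

```python
# Illustrative sketch of prompt assembly from element metadata.
# Field names and prompt wording are hypothetical.
def build_prompt(element):
    lines = [f"Asset: {element['name']} (template: {element['template']})"]
    for attr in element["attributes"]:
        lines.append(f"- {attr['name']} ({attr['unit']}): {attr['description']}")
    lines.append("Suggest panel configurations and analysis rules for this asset.")
    return "\n".join(lines)

meter = {
    "name": "meter_001", "template": "ElectricityMeter",
    "attributes": [
        {"name": "voltage", "unit": "V", "description": "Phase voltage"},
        {"name": "current", "unit": "A", "description": "Phase current"},
    ],
}
prompt = build_prompt(meter)
```

The point is that the prompt carries structural metadata, units, and descriptions — which is what lets the model produce device-specific configurations rather than generic ones.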

This is possible because TDengine’s data is AI-ready. The “one table per device” data model, the supertable abstraction that reduces cross-device aggregation to a single query, and the business semantics introduced through IDMP’s data contextualization — these foundations give the large language model enough context to understand each individual device, rather than facing a mass of values stripped of meaning.
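The effect of the supertable abstraction can be shown with a toy model — per-device tables sharing one schema, described by tags, so that a cross-device aggregation becomes a single operation. The data and structures below illustrate the concept only, not TDengine's storage:

```python
from statistics import mean

# Toy model of "one table per device" under a shared supertable schema.
# Tags (e.g. location) describe each device; rows are (timestamp, voltage).
device_tables = {
    "meter_001": {"tags": {"location": "plant_a"}, "rows": [(1, 218.0), (2, 221.5)]},
    "meter_002": {"tags": {"location": "plant_b"}, "rows": [(1, 219.2), (2, 220.1)]},
}

def supertable_avg(tables):
    """Aggregate across every device table in one pass -- the equivalent of
    a single query against the supertable rather than N per-device queries."""
    return mean(row[1] for t in tables.values() for row in t["rows"])
```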

Figure: The LLM suggests a list of panels; you can also ask it to generate a single panel for you.

Chat BI: Drive the Entire Platform in Natural Language

Zero-Query Intelligence gives you a starting point, but sometimes you want something more specific or need to initiate an action directly. The Chat BI interface accepts any natural language input related to the system, not just data queries.

You can ask about data: how much less power did a specific turbine generate yesterday compared with the same day last week? At what hour did OEE drop on Friday’s night shift? The AI translates the question into TDengine SQL, queries actual data, and returns an answer that is traceable and verifiable — not a language model inference.
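The traceability claim is the important part: the answer comes from executing real SQL against stored data, not from the model's memory. A minimal sketch of that loop, using SQLite and a hand-written translation step standing in for the LLM (table and column names are invented):

```python
import sqlite3

def answer_question(conn, question):
    """Translate a question to SQL, run it, and return both the answer and
    the SQL that produced it, so the result stays traceable."""
    # Hypothetical translation step; in IDMP the LLM produces TDengine SQL.
    if "yesterday" in question:
        sql = "SELECT SUM(power_kwh) FROM readings WHERE day = 'yesterday'"
    else:
        raise ValueError("question not understood")
    (total,) = conn.execute(sql).fetchone()
    return {"answer": total, "sql": sql}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (day TEXT, power_kwh REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("yesterday", 120.5), ("yesterday", 98.0), ("today", 110.0)])
result = answer_question(conn, "How much power was generated yesterday?")
```

Returning the generated SQL alongside the answer is what makes the result verifiable: anyone can rerun the query against the database.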

You can also describe a panel you want — “show average hourly voltage over the past 7 days as a line chart” — and AI generates the panel configuration for you. You can describe an analysis rule — “calculate maximum hourly current and alert when it exceeds normal range” — and AI creates the complete analysis task. For questions requiring multi-window correlation or multi-step reasoning, deep thinking mode provides extended reasoning. If you are not sure where to start, the system suggests relevant questions based on your asset structure and current data; voice input is also supported.
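A sketch of the description-to-configuration step — the config field names below are hypothetical, not IDMP's actual panel schema, and simple keyword rules stand in for the LLM:

```python
# Hypothetical description-to-panel-config step. The field names
# ("chart_type", "aggregation", ...) are illustrative only.
def build_panel_config(description):
    return {
        "title": description,
        "chart_type": "line" if "line chart" in description else "bar",
        "aggregation": "avg" if "average" in description else "last",
        "interval": "1h" if "hourly" in description else "1d",
        "window": "7d" if "7 days" in description else "24h",
    }

cfg = build_panel_config(
    "show average hourly voltage over the past 7 days as a line chart")
```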

Zero-Query Intelligence and Chat BI are two sides of the same system: one delivers findings before you ask, the other lets you drive the entire platform through natural language.

Panels That Explain Themselves

Numbers show you what happened. Understanding what it means often requires experience. Open any panel — AI-generated or manually created — and click Panel Insights. The AI data insights feature generates a natural-language narrative for that specific chart: whether the overall value range is normal, which peaks or troughs appeared, how current readings compare against historical averages, and anything that might warrant attention. This narrative is not a fixed template — it is generated fresh from the current data window each time you click, reflecting the latest state of your data.
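The kind of narrative involved can be illustrated with a small stand-in: summary statistics are computed from the current window each time, so the text always reflects the latest data. The wording here is illustrative; in IDMP the narrative is LLM-generated:

```python
from statistics import mean

def panel_insight(window, historical_avg):
    """Generate a fresh narrative for the current data window
    (illustrative stand-in for the LLM-generated text)."""
    cur = mean(window)
    peak, trough = max(window), min(window)
    delta = (cur - historical_avg) / historical_avg * 100
    trend = "above" if delta > 0 else "below"
    return (f"Window average {cur:.1f} is {abs(delta):.1f}% {trend} the "
            f"historical average; peak {peak:.1f}, trough {trough:.1f}.")
```

Because the statistics are recomputed on every call, two clicks on the same panel at different times can yield different narratives — matching the "generated fresh from the current data window" behavior described above.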

This turns dashboards from something you have to interpret yourself into something that tells you what to pay attention to.

Root Cause Analysis: The Investigation Starts Before You Do

A compressor triggers a high-temperature alert. The traditional response flow: an operator opens the alert, pulls historical data, scrolls through attributes one by one, applies experience to form a hypothesis, and decides on a course of action. At best this takes half an hour; at worst it takes hours — while the equipment may still be running.

Root cause analysis automates this workflow. Clicking the Root Cause Analysis button on an event detail page launches a multi-step investigation: retrieving the full time-series data surrounding the event, running statistical exploration across attributes, searching relevant technical documentation and known failure patterns for this equipment type, generating a ranked list of root cause hypotheses, validating each hypothesis against the actual data, and producing a structured report — timeline, data findings, root cause hypotheses with supporting evidence, and recommended actions. By the time the operations engineer opens the report, most of the investigative work is already done. The remaining task is judgment, decision, and action — not data archaeology.
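The validate-then-rank step can be sketched as follows. The failure patterns and scoring here are invented for illustration — in IDMP the hypotheses come from the LLM plus documentation search, and validation runs against the event's actual time-series data:

```python
def root_cause_report(series, hypotheses):
    """Validate each candidate hypothesis against the actual data and rank
    the survivors -- a stand-in for the multi-step investigation."""
    validated = [(h["name"], h["check"](series)) for h in hypotheses]
    ranked = sorted((v for v in validated if v[1] > 0), key=lambda v: -v[1])
    return {"root_causes": [name for name, _ in ranked]}

temps = [71, 72, 74, 79, 86, 95]  # steadily rising discharge temperature
hypotheses = [  # hypothetical pattern checks, scored 0.0-1.0
    {"name": "cooling degradation",
     "check": lambda s: 1.0 if s[-1] - s[0] > 10 else 0.0},
    {"name": "sensor fault",
     "check": lambda s: 1.0 if any(v < 0 for v in s) else 0.0},
]
report = root_cause_report(temps, hypotheses)
```

Only hypotheses supported by the data survive into the report, which is why the engineer's remaining task is judgment rather than data archaeology.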

TDgpt: Time-Series AI Running Inside the Database Core

Large language models are well suited to semantic understanding, content generation, and multi-step reasoning. Running computation-intensive time-series analysis — anomaly detection, trend forecasting, missing data imputation — requires a different class of engine. TDgpt is TDengine’s built-in time-series AI engine, executing these tasks directly inside the database core through SQL functions such as ANOMALY_WINDOW() and FORECAST(). It operates independently of any large language model connection and is fully available in air-gapped, on-premises deployments.

Anomaly detection requires no predefined thresholds. TDgpt learns the normal behavior pattern of each monitored attribute and continuously flags deviations from that pattern, even when values remain within nominal ranges. A subtle periodic fluctuation in injection barrel temperature, a chiller COP drifting outside its historical envelope — both are detected and surfaced as events before any hard limit is crossed. The forecasting engine estimates future attribute values from historical behavior: when will a storage tank reach its upper limit, will compressor discharge temperature breach its threshold in the next 24 hours, how will influent flow at a wastewater plant shift across a holiday period. These become data-driven answers. When sensors go offline or network interruptions leave gaps in the data, missing data imputation uses learned signal behavior to estimate values across the gap, ensuring downstream KPI calculations and analyses are not skewed by missing readings.
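To make these ideas concrete — these are not TDgpt's actual algorithms, which span a full library — here is a minimal sketch of threshold-free anomaly detection via a learned statistical baseline, and gap imputation via linear interpolation:

```python
from statistics import mean, stdev

def detect_anomalies(history, new_points, z=3.0):
    """Flag deviations from the learned pattern, even when values stay
    inside nominal limits (simple z-score baseline as illustration)."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_points if abs(x - mu) > z * sigma]

def impute_gap(before, after, gap_len):
    """Estimate values across a gap left by an offline sensor
    (linear interpolation as the simplest illustration)."""
    step = (after - before) / (gap_len + 1)
    return [before + step * (i + 1) for i in range(gap_len)]
```

Note that `detect_anomalies` flags 11.0 against a history hovering around 10.0 even if 11.0 is well inside a nominal alarm limit — the deviation is relative to learned behavior, not to a fixed threshold.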

TDgpt supports a complete algorithm library spanning classical statistical models, machine learning, and deep learning, as well as TDtsfm — TDengine’s pretrained time-series foundation model that supports zero-shot inference when historical data is limited, requiring no training to deploy.

Frequently Asked Questions

  1. How is Zero-Query Intelligence different from Chat BI?

    Chat BI requires users to ask questions — and the quality of the output depends on the quality of the questions. Zero-Query Intelligence inverts this: the system reads the data, senses the operational context, and delivers dashboards, analysis recommendations, and KPI suggestions before any question is asked. It works regardless of the user’s domain expertise or familiarity with data tools.

  2. What is the relationship between TDgpt and the connected large language model?

    TDgpt is TDengine’s built-in time-series AI engine, executing computation-intensive analytical tasks — anomaly detection, forecasting, missing data imputation — directly inside the database. The large language model handles natural language understanding, content generation, panel and analysis creation, narrative interpretation, and root cause reasoning. The two engines are independent: all TDgpt capabilities remain fully available with no large language model connection configured.

  3. Does root cause analysis require additional setup?

    No. Root cause analysis uses the connected large language model’s deep thinking mode — no additional setup is required. Click the Root Cause Analysis button on any event detail page to launch the investigation. The system completes data retrieval, hypothesis generation, and validation automatically, typically producing a structured report within a few minutes.

  4. Are AI-generated panels and metrics accurate enough for production use?

    AI-generated outputs are a high-quality starting point, not a final answer. Panels and analysis configurations can be reviewed and modified before saving. Composite metrics include a download/upload workflow for human review and correction. AI completes the bulk of the configuration work; the final judgment and sign-off remain with the engineer.

  5. Do these AI capabilities require high-quality data to work well?

    Data quality has a direct impact. Elements with descriptive attribute metadata, configured physical units, and defined operating limits produce more accurate AI recommendations. Attributes with arbitrary field names and missing context produce weaker results. This is why TDengine IDMP emphasizes data standardization and contextualization — AI-Ready data is the foundation on which all intelligent capabilities depend.

  6. Are AI features available in air-gapped, on-premises deployments?

    All TDgpt capabilities — anomaly detection, forecasting, and missing data imputation — run inside TDengine with no external dependencies, and are fully available in offline or air-gapped environments. Zero-Query Intelligence, Chat BI, and root cause analysis depend on a large language model and require a reachable LLM endpoint, which can be a locally deployed private model.