Data Alone Does Not Create Insight
For decades, industrial data historians have played a critical role in collecting and storing time-series data. Systems like PI have been highly effective at capturing signals from the plant floor and making them available for visualization and basic analysis.
This capability is essential, but it is no longer sufficient.
In modern industrial environments, the expectation has shifted. It is not enough to store and display data. Organizations increasingly expect systems to generate insights: detect anomalies, predict future behavior, identify patterns, explain deviations, and analyze root causes.
In other words, the goal is no longer just to see data, but to understand it.
The Limits of Historian-Centric Analytics—and Why It Moved Outside
Data historians were not designed as full-fledged analytics platforms, and this becomes increasingly evident as industrial use cases evolve.
While many historians provide built-in calculation engines and rule-based processing, these capabilities are typically limited in scope. They are well suited for predefined logic, but not for exploratory, model-driven, or AI-based analytics.
As industrial systems become more complex, engineers need to go beyond simple calculations. They need to detect subtle anomalies, forecast system behavior, impute missing data, analyze correlations across large numbers of variables, and build regression or clustering models. These types of analysis require flexibility, iteration, and access to a broader ecosystem of algorithms.
This is where the limitations of historian-centric analytics become clear.
As a result, advanced analytics has gradually moved outside the historian.
Organizations increasingly rely on specialized tools like Seeq or TrendMiner to perform event-based analysis, batch comparison, golden profile generation, and model-driven exploration. These tools extend the historian by accessing data through connectors or queries rather than duplicating it, allowing organizations to leverage existing infrastructure.
But even without data duplication, the separation remains.
Analytics is no longer part of the core data system. It exists in a different layer, with its own execution environment, logic, and workflows. Engineers define events in one system, analyze them in another, and manage models in yet another context.
This creates fragmentation.
Logic is distributed across systems. Workflows become harder to manage and standardize. And while the data may remain in place, the intelligence built on top of it becomes more difficult to reuse and scale.
The issue is not where the data lives; it is where the intelligence lives.
From Python Flexibility to SQL Simplicity
In the broader data and AI ecosystem, Python has become the dominant language for analytics and modeling.
It provides access to a rich ecosystem of libraries for statistics, machine learning, and time-series analysis. As a result, platforms like Seeq and TrendMiner introduced Python support, allowing users to bring their own algorithms and extend analytics beyond built-in capabilities.
This is an important step forward.
More importantly, Python support is essential because AI is evolving at an unprecedented pace, with new models and algorithms emerging continuously. No single vendor can keep up with this speed. By supporting Python, the system remains open—allowing organizations to integrate the latest technologies without being locked into a fixed set of capabilities.
However, flexibility alone is not enough.
When analytics is implemented through Python scripts, it often remains separate from the core system. Scripts need to be written, deployed, managed, and integrated into workflows. Analytics becomes powerful, but not seamless. It is accessible mainly to advanced users, while most engineers still rely on predefined tools and interfaces.
This creates a gap between capability and usability. To bridge this gap, analytics needs to be both flexible and simple. One effective approach is to expose analytics capabilities as SQL functions.
In this model, advanced analytics—such as anomaly detection, forecasting, and data imputation—can be invoked directly within queries. Engineers do not need to manage scripts or pipelines. They can use analytics in the same way they access data.
Behind the scenes, the system can still leverage Python, machine learning models, or other advanced algorithms. For example, TDengine Historian uses TDgpt to orchestrate different models—ranging from statistical methods to LLMs and time-series foundation models—while exposing the results through simple SQL functions.
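As a rough sketch of what this looks like in practice, the queries below follow the general shape of TDgpt's SQL interface; the table sensor_data and column pressure are hypothetical, and the exact function names and options should be verified against the TDengine documentation.

```sql
-- Hypothetical table: sensor_data(ts TIMESTAMP, pressure DOUBLE).

-- Anomaly detection: ANOMALY_WINDOW asks TDgpt to segment the series
-- into windows it considers anomalous; the outer query then
-- aggregates within each flagged window.
SELECT _wstart, _wend, COUNT(*) AS points, AVG(pressure) AS avg_pressure
FROM sensor_data
ANOMALY_WINDOW(pressure, "algo=ksigma,k=3");

-- Forecasting: FORECAST returns predicted future values; TDgpt routes
-- the request to the configured model, whether a statistical method
-- or a time-series foundation model.
SELECT _frowts, FORECAST(pressure, "algo=holtwinters")
FROM sensor_data;
```

No scripts or pipelines are involved: model selection happens behind the algo option, and the engineer interacts with analytics exactly as with any other query.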
This fundamentally changes how analytics is adopted.
Python provides the openness needed to keep up with AI innovation. SQL provides the simplicity needed to scale across users and use cases. Only when both are combined can advanced analytics become truly usable in industrial systems.
From Native Analytics to an AI-Native Data Foundation
To fully unlock the value of industrial data, analytics needs to become part of the data foundation itself.
This means analytics should not be something that runs outside the system, but something that is integrated into how data is stored, queried, and used. Instead of exporting data to external tools, the system itself becomes the place where analysis happens.
Modern industrial data platforms are starting to move in this direction.
For example, platforms like TDengine integrate real-time stream processing, event generation, and analytics directly into the core system. Data can be processed, analyzed, and enriched as it is ingested, enabling true real-time analytics workflows without relying on external pipelines.
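As a minimal sketch, assuming the same hypothetical sensor_data table as above, such a continuously running transformation can be declared with TDengine's CREATE STREAM statement; the stream and target-table names here are illustrative, and the options are simplified.

```sql
-- Declare a continuous computation: as rows arrive in sensor_data,
-- per-minute aggregates are computed and written into pressure_1m,
-- with no external pipeline or scheduler involved.
CREATE STREAM pressure_1m_stream
  INTO pressure_1m
  AS SELECT
       _wstart       AS ts,
       AVG(pressure) AS avg_pressure,
       MAX(pressure) AS max_pressure
     FROM sensor_data
     INTERVAL(1m);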
At the same time, analytics is no longer limited to predefined rules. With built-in AI capabilities and contextualized data models, systems can automatically detect anomalies, generate insights, and trigger events based on streaming data.
This represents a fundamental shift. Analytics is no longer a separate layer. It becomes part of the data foundation itself.
When analytics is native, it can be applied consistently across real-time data streams, historical data, event-based workflows, and asset-centric models. Insights are no longer generated only when requested—they can be continuously produced as part of the system’s operation.
This is what defines an AI-native analytics model. Not a system where AI is added on top, but a system where analytics and intelligence are built into the foundation.
Closing Thought
Industrial systems have evolved from collecting data to analyzing it. But in many environments, analytics is still treated as something external.
To move forward, analytics must become native to the data foundation. Not an add-on, not a separate tool, but a core capability.
At the same time, native analytics does not mean a closed system. A modern industrial data foundation must remain open—allowing integration with external tools, custom models, and evolving technologies.
Openness ensures evolution, and native capabilities ensure usability. Only then can industrial data systems deliver the level of insight required in the AI era.