Data Tells You What Changed, Not What Happened
Industrial systems generate continuous streams of time-series data. Every second, sensors record temperature, pressure, flow, vibration, and countless other signals. For decades, this has been the foundation of industrial data systems: collect signals, store them efficiently, and visualize how they change over time.
This approach works, but it has an inherent limitation.
Time-series data tells us what changed, but it does not tell us what actually happened.
A trend may show a pressure spike, but it does not explain whether that spike occurred during startup, steady operation, or an abnormal condition. A drop in temperature may be visible, but without context, it is unclear whether it is expected behavior or an early sign of failure. Engineers looking at the same data may reach different conclusions depending on their experience and familiarity with the system.
This is the gap between data and understanding.
Engineers do not think in terms of isolated signals or even continuous trends. They think in terms of events—startups, shutdowns, batches, transitions, and anomalies. These are the units of operation. These are the moments that matter.
From Signals to Events: What Historians Got Right—and What Modern Platforms Missed
The gap between signals and understanding is not new, nor does it persist because the industry failed to recognize it. In fact, one of the most important advances in industrial data systems was precisely an attempt to bridge this gap.
PI System introduced the concept of Event Frames, which fundamentally changed how engineers could work with time-series data. Instead of treating data as an endless stream of values, Event Frames allowed engineers to define meaningful periods—startups, shutdowns, batches, and abnormal conditions—and associate them with assets, attributes, and context.
This was a major step forward. It acknowledged that industrial operations are not just continuous signals, but sequences of meaningful events. It aligned the data model with how engineers actually think about systems. For many users, Event Frames became one of the most valuable capabilities in PI, because it enabled them to move from observing data to understanding operations.
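To make the concept concrete, here is a minimal sketch of what an event-frame-style record might look like as a data structure. The field names are illustrative assumptions, not the PI System's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class EventFrame:
    """An event-frame-style record: a named, bounded period of operation.

    Field names are illustrative, not the PI System's actual API.
    """
    name: str                # e.g. "P-101 startup" or "Batch 2024-117"
    event_type: str          # "startup", "shutdown", "batch", "excursion", ...
    asset: str               # the equipment or unit the event belongs to
    start: datetime
    end: Optional[datetime]  # None while the event is still open
    attributes: dict = field(default_factory=dict)  # recipe, operator, trigger, ...

# A startup on a specific pump, carrying its own context:
ev = EventFrame(
    name="P-101 startup",
    event_type="startup",
    asset="Site/Unit-3/P-101",
    start=datetime(2024, 5, 2, 6, 14),
    end=datetime(2024, 5, 2, 6, 31),
    attributes={"operator": "shift-A", "trigger": "scheduled"},
)
```

The essential move is that the period itself becomes a first-class object with identity, boundaries, and context, rather than an implicit time window someone has to remember.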
In this sense, data historians—especially PI System—did not miss the problem. They addressed it with a strong conceptual model.
However, what is interesting is what happened next.
As the industry moved toward modern data infrastructure, platforms like Snowflake and Databricks introduced massive improvements in scalability, storage, and computational power. They made it possible to process large volumes of data, run complex analytics, and integrate with modern data ecosystems.
But in this transition, something important was lost.
These platforms are designed primarily around tables, files, and generic data processing paradigms. They do not provide a native concept equivalent to Event Frames. There is no built-in way to define “a startup,” “a batch,” or “an abnormal operation” as first-class entities in the data model. Engineers are left to reconstruct these concepts through SQL, pipelines, or custom applications.
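To illustrate what that reconstruction looks like in practice, here is a rough pandas sketch that derives startup events from a raw state signal stored as plain rows. The column names and state labels are assumptions, not any platform's built-in model:

```python
import pandas as pd

# Raw rows as a table-centric platform would store them.
# Column names and state labels are hypothetical.
df = pd.DataFrame({
    "ts": pd.to_datetime([
        "2024-05-02 06:10", "2024-05-02 06:14", "2024-05-02 06:31",
        "2024-05-02 18:02", "2024-05-02 18:05", "2024-05-02 18:20",
    ]),
    "state": ["idle", "starting", "running", "idle", "starting", "running"],
}).sort_values("ts")

# The event has to be reverse-engineered from transitions in the signal:
prev = df["state"].shift()
starts = df.loc[(df["state"] == "starting") & (prev != "starting"), "ts"]
ends = df.loc[(df["state"] == "running") & (prev == "starting"), "ts"]

startups = pd.DataFrame({"start": starts.values, "end": ends.values})
print(startups)  # two reconstructed "startup" events
```

Logic like this ends up re-implemented in SQL, pipelines, or notebooks by every team that needs it, and each copy can drift from the others.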
This creates a fundamental disconnect.
The infrastructure becomes more powerful, but the data becomes less meaningful from an operational perspective. While it is easier to store and process data, it becomes harder to answer the questions that matter most to engineers: What happened? When did it start? How does it compare to similar situations?
Even with scalable and powerful infrastructure, true event-centric analysis remains difficult—not because of compute limitations, but because the operational model is missing.
The Gap Between Event Modeling and Event Analysis
Event Frames are a powerful concept, but even their original implementation reveals certain limitations.
In PI System, Event Frames are defined and managed within the Asset Framework, providing a structured way to associate events with assets and time ranges. This creates a strong foundation for modeling operational behavior, and it enables engineers to identify meaningful periods such as batches, startups, or abnormal conditions.
However, when it comes to analyzing those events, the workflow becomes more fragmented. Event-based analysis is typically performed in tools like PI Vision, where engineers visualize data within selected time windows. While this works for basic investigation, it is not designed for deeper event-centric analytics.
In practice, many important industrial use cases are inherently event-driven. For example, in batch processes, engineers often need to compare multiple batches to understand what “good” looks like. They align batches based on start time, generate a golden profile, and analyze how a specific batch deviates from that baseline. This type of analysis is essential for improving quality, identifying root causes, and optimizing operations.
While Event Frames make it possible to define and organize these batches, the actual analysis—alignment, normalization, statistical comparison, and pattern identification—is not deeply supported within the core system. As a result, many organizations turn to specialized tools like Seeq or TrendMiner, which are designed specifically for this type of event-based analysis.
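To ground what that analysis involves, here is a minimal sketch assuming the batches have already been extracted and resampled to a common time grid: align each batch to its own start, build a golden profile as the pointwise median of known-good batches, and score each batch by its deviation from that baseline. The data and the choice of median and mean absolute error are illustrative.

```python
import numpy as np

# Hypothetical input: each row is one batch's temperature trace, already
# resampled to a common 1-minute grid and aligned to its own start time.
batches = np.array([
    [20, 35, 52, 68, 80, 85, 86],   # batch A (good)
    [21, 36, 50, 67, 81, 84, 86],   # batch B (good)
    [20, 34, 53, 69, 79, 85, 87],   # batch C (good)
    [20, 30, 41, 55, 70, 78, 83],   # batch D (slow ramp)
], dtype=float)

# Golden profile: pointwise median across the known-good batches.
golden = np.median(batches[:3], axis=0)

# Deviation of each batch from the baseline (mean absolute error).
deviation = np.abs(batches - golden).mean(axis=1)
print(deviation)                       # batch D stands out
print("most deviant batch:", int(np.argmax(deviation)))
```

None of this is difficult once batches exist as clean, aligned units; the practical difficulty is that extracting and aligning those units is left to each downstream tool.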
This highlights an architectural gap.
Event modeling exists, but event analysis is not deeply integrated. Events are defined within the system, but they are not treated as the central unit of analysis. Extracting insights from events often requires additional tools, data movement, and duplicated logic across systems.
If events are the natural unit of operation, they should also be the natural unit of analysis.
From Asset Structure to Operational Behavior
In the previous discussion on asset-centric modeling, we focused on structure—how data is organized around equipment and systems. That provides the foundation for understanding what exists.
Event-centric modeling builds on top of that by introducing behavior.
If assets describe the system, events describe how the system operates over time. They capture transitions, sequences, and conditions that cannot be expressed in a static model. A pump startup, a batch run, or an abnormal condition is not just a set of values, but a meaningful segment of operation.
Together, these two perspectives form a more complete representation of industrial systems.
Asset-centric modeling defines the structure.
Event-centric modeling defines the behavior.
Without events, data remains a continuous stream that must be interpreted manually. With events, the system begins to express operational intent.
Why Events Become Even More Important in the AI Era
The importance of event-centric modeling becomes even more pronounced in the context of AI.
AI systems are often applied directly to time-series data, but this approach has inherent limitations. Raw signals lack clear boundaries and context. Patterns are difficult to interpret without understanding the operational state in which they occur.
Events provide a natural structure for AI.
They segment data into meaningful units, making it possible to compare similar operations, align time-series data based on event boundaries, and analyze variations between normal and abnormal behavior. More importantly, they provide the context that AI systems need to interpret data correctly.
An AI model analyzing vibration data without knowing whether the system is starting up, shutting down, or operating steadily will produce unreliable results. The same model, when applied within well-defined event frames, can generate far more accurate and actionable insights.
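As a toy illustration of that point, here is a sketch of z-score anomaly detection on vibration data, scored first against one global baseline and then against a per-state baseline. The data, distributions, and thresholds are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic vibration RMS: startups are legitimately rough, steady state is smooth.
startup = rng.normal(7.0, 1.5, 200)   # readings during startups
steady = rng.normal(2.0, 0.2, 800)    # readings during steady operation
x = np.concatenate([startup, steady])
state = np.array(["startup"] * 200 + ["steady"] * 800)

x[500] = 3.5  # one genuinely anomalous steady-state reading

def zscores(v):
    # Absolute deviation from the mean, in units of standard deviation.
    return np.abs(v - v.mean()) / v.std()

# Without event context: one global baseline. The anomaly drowns in
# startup noise and scores as unremarkable.
print("global z at anomaly:", round(float(zscores(x)[500]), 2))

# With event context: a separate baseline per operating state.
z = np.empty_like(x)
for s in ("startup", "steady"):
    m = state == s
    z[m] = zscores(x[m])
print("per-state z at anomaly:", round(float(z[500]), 2))
```

The model is identical in both passes; only the event boundary changes, and that is what turns noise into signal.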
Without events, AI sees data.
With events, AI understands behavior.
Toward an Event-Centric Industrial Data Foundation
To fully realize the value of industrial data, events need to become a native part of the data foundation, not an optional layer.
This means that event modeling and event analysis should not be separated across different systems and tools. Events should be defined, stored, and analyzed within a unified architecture, alongside time-series data and asset models.
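As one possible shape for such an architecture, here is a minimal sketch using SQLite: assets, time-series samples, and events live side by side in one store, so an event-scoped query needs no data movement. The schema and all names are assumptions, not a reference design.

```python
import sqlite3

# In-memory sketch of an event-centric store: assets, samples, and
# events side by side. Schema and names are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE asset  (id TEXT PRIMARY KEY, parent TEXT);
    CREATE TABLE sample (asset TEXT, ts TEXT, value REAL);
    CREATE TABLE event  (id INTEGER PRIMARY KEY, asset TEXT,
                         type TEXT, t_start TEXT, t_end TEXT);
""")
con.execute("INSERT INTO asset VALUES ('P-101', 'Unit-3')")
con.executemany("INSERT INTO sample VALUES ('P-101', ?, ?)", [
    ("2024-05-02T06:10", 1.8), ("2024-05-02T06:20", 6.2),
    ("2024-05-02T06:40", 2.1),
])
con.execute("""INSERT INTO event VALUES
    (1, 'P-101', 'startup', '2024-05-02T06:14', '2024-05-02T06:31')""")

# Event-scoped query: samples *during startups of P-101*, in place.
rows = con.execute("""
    SELECT s.ts, s.value FROM sample s
    JOIN event e ON s.asset = e.asset
    WHERE e.type = 'startup' AND s.ts BETWEEN e.t_start AND e.t_end
""").fetchall()
print(rows)  # [('2024-05-02T06:20', 6.2)]
```

The point of the sketch is not the storage engine but the join: when events are first-class records next to the signals, "what happened during startups" becomes a query rather than a project.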
What PI System introduced as an important concept now needs to be elevated into a foundational capability.
In the AI era, event-centric modeling is no longer just a feature for analysis—it is a requirement for building intelligent systems.
Signals describe variation.
Assets describe structure.
Events describe behavior.
Together, they form the foundation for understanding industrial operations—and for enabling the next generation of AI-driven systems.