
Seeing Throughput, Blend Quality, and Margin Together in Refinery Operations

Jim Fan

March 19, 2026

A practical refinery operations example showing how process-unit performance, finished-product quality, and real-time economic indicators can be monitored in one operationally meaningful system.

Executive Summary

Refinery performance rarely breaks down in one obvious place. More often, profitability erodes through a series of smaller operational deviations: a major unit runs below target feed rate, a blend drifts closer to its specification limit, or a process constraint reduces flexibility across the site. Each signal may be visible somewhere, but they are often not visible together.

That separation creates a real operating problem. Control room teams usually see process conditions in real time. Quality teams track specification-sensitive product properties. Planning and economics teams monitor refinery margin and production targets. When those views live in separate systems, it becomes harder to understand cause and effect across the refinery.

This application is designed to address that gap. It brings unit throughput, blend quality, and site-level economic indicators into one connected operating view so that operators, engineers, and planners can identify deviations earlier, understand their downstream impact faster, and respond more consistently.

Why This Challenge Matters in Refining

A modern refinery is an interconnected system. The vacuum distillation unit, fluid catalytic cracker, hydrotreater, hydrocracker, alkylation unit, sulfur recovery unit, and blending operations do not operate in isolation. A change in feed, yield, sulfur removal performance, or blend composition can ripple through product quality and site economics.

That is why refining teams constantly balance multiple objectives at once:

  • keep major units running near plan
  • maintain product quality within specification
  • avoid operational constraints and instability
  • protect or improve refinery margin

The difficulty is not a lack of data. Most refineries already generate large amounts of process information. The real challenge is connecting that information in a way that helps people make better decisions in the moment.

A unit may still be online, but running below target throughput. A gasoline blend may still meet specification, but only because operators are giving away valuable blend components. A site-level margin indicator may be weaker than expected, but the operational source of that weakness may not be immediately obvious. If these signals remain disconnected, teams end up reacting later than they should.

Historical trend and analysis view comparing unit throughput, blend quality movement, and margin behavior over time

The Operational Problem This Application Solves

This application focuses on a practical refinery visibility problem: operations teams need a shared view of unit performance, blend quality, and economic impact, but those signals are often fragmented across different systems and time horizons.

That fragmentation shows up in several ways.

Operators may see that a unit is below target, but not know how much margin is being left on the table. Blending teams may see a product drifting closer to an off-spec condition, but not know whether the issue is tied to upstream unit behavior or local blend decisions. Planning teams may see weaker economics, but not have an immediate line of sight into which unit, product, or operating change is driving the result.

This application brings those layers together into one operating model. Instead of treating process performance, product quality, and economic signals as separate conversations, it makes them visible as part of the same refinery picture.

What the Application Models

The scenario is modeled around a large United States Gulf Coast fuels refinery. It uses a simplified but realistic structure that reflects how refinery teams think about the site operationally.

At the top level, the refinery is represented as BAYTOWN. Under that, the site is organized into three main areas:

  • Units, which represent the major process units
  • Blends, which represent finished product blending operations
  • Financials, which represent site-level economic indicators

Example asset paths include:

  • BAYTOWN.units.VDU-201
  • BAYTOWN.units.FCC-310
  • BAYTOWN.units.HDS-145
  • BAYTOWN.blends.Regular_Gasoline
  • BAYTOWN.financials
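
The hierarchy above can be sketched as a simple nested structure. A minimal sketch in Python, where the unit and blend names follow the example paths but the telemetry tag lists are illustrative assumptions, not the application's actual schema:

```python
# Illustrative sketch of the BAYTOWN asset hierarchy as a nested dict.
# Unit and blend names follow the example asset paths; the telemetry
# tag names are assumptions for illustration only.
REFINERY_MODEL = {
    "BAYTOWN": {
        "units": {
            "VDU-201": ["feed_rate", "target_feed_rate", "status"],
            "FCC-310": ["feed_rate", "target_feed_rate", "status"],
            "HDS-145": ["feed_rate", "target_feed_rate", "sulfur_ppm"],
        },
        "blends": {
            "Regular_Gasoline": ["quality", "quality_spec", "status"],
        },
        "financials": ["margin_per_bbl_rt", "margin_prev_day"],
    }
}

def asset_path(*parts: str) -> str:
    """Join hierarchy levels into a dotted asset path."""
    return ".".join(parts)

print(asset_path("BAYTOWN", "units", "FCC-310"))  # BAYTOWN.units.FCC-310
```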

The modeled process units include a vacuum distillation unit, a fluid catalytic cracker, a diesel hydrotreater, an alkylation unit, a sulfur recovery unit, a hydrocracker, a delayed coker, and a naphtha hydrotreater. These units generate the types of operational telemetry refinery teams typically watch every day: feed rate, target feed rate, unit status, temperatures, pressures, sulfur measurements, and estimated economic impact.

The blending layer includes products such as regular gasoline, premium gasoline, ultra-low sulfur diesel, and jet fuel. Each blend carries its own quality values, specification targets, operating status, and estimated financial impact.

The financial layer includes indicators such as previous day margin per barrel, total previous day margin, real-time estimated margin, and estimated daily margin rate.

Model Tree for Oil Refinery

How the Application Works

At the foundation of the application is a continuous flow of time-series operational data. Unit telemetry, blend measurements, and site-level financial indicators are collected and organized in a way that preserves both history and operational context.

On top of that data layer, the application calculates key performance indicators, compares actual values to targets, and applies event logic to identify meaningful operating conditions. That makes it possible to move beyond isolated tag trends and toward a more interpretable view of refinery behavior.
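
As a minimal sketch of the actual-versus-target comparison described above (the tolerance value and tag names are assumptions, not values from the application):

```python
def utilization(feed_rate: float, target_feed_rate: float) -> float:
    """Return feed rate as a fraction of target feed rate (a simple KPI)."""
    if target_feed_rate <= 0:
        raise ValueError("target feed rate must be positive")
    return feed_rate / target_feed_rate

def below_plan(feed_rate: float, target_feed_rate: float,
               tolerance: float = 0.05) -> bool:
    """Flag a unit running more than `tolerance` below its target.
    The 5% tolerance is an illustrative assumption."""
    return utilization(feed_rate, target_feed_rate) < 1.0 - tolerance

# A unit at 88,000 bbl/d against a 95,000 bbl/d target is ~7% below plan.
print(below_plan(88_000, 95_000))  # True
```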

The application supports four core capabilities.

First, it provides real-time monitoring across units, blends, and financial indicators. Users can see which major units are meeting plan, which product properties are moving toward risk, and whether current operating conditions are helping or weakening site economics.

Second, it supports historical analysis. Teams can trend variables over time, compare actual performance to target conditions, and investigate when deviations begin to affect the refinery more broadly.

Third, it supports event-driven visibility. Important operating conditions such as margin degradation, margin opportunity, or severe deterioration can be defined and surfaced as meaningful events rather than left buried in raw signals.

Fourth, it supports an asset-centered operating model. Data is not treated as a set of disconnected streams; it is tied to units, products, and financial objects so that performance can be understood in context.

Operations Overview for the Refinery

Example Key Performance Indicators and Events

One of the most important indicators in this application is real-time estimated margin per barrel. This is a high-level measure of how profitable the refinery appears to be at a given moment based on current operating conditions. It reflects the combined effect of throughput, yields, quality performance, and constraints.

Another important indicator is unit feed rate versus target feed rate. This is simple but powerful. It tells users where the refinery is running below plan, where it may be pushing harder than expected, and where margin opportunities or constraints may be developing.

At the blend level, one of the most useful indicators is measured quality versus target specification. This helps show whether the product is comfortably in spec, drifting toward risk, or being over-controlled in a way that gives away value.
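
The "giving away value" idea can be made concrete with a quality-giveaway calculation. A sketch, assuming a minimum-spec property such as octane and illustrative numbers:

```python
def quality_giveaway(measured: float, spec_min: float) -> float:
    """Giveaway for a minimum-spec property: how far above the spec the
    blend actually runs. Positive means value is being given away;
    negative means the blend is off-spec."""
    return measured - spec_min

# Illustrative example: regular gasoline blended to 87.8 octane against
# an assumed 87.0 minimum spec gives away 0.8 octane numbers.
print(round(quality_giveaway(87.8, 87.0), 1))  # 0.8
```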

Overview dashboard with Feed Rate and Estimated Margin

Event logic adds another layer of clarity.

A Margin Degradation event can be triggered when real-time estimated margin per barrel falls below the previous day margin benchmark. This signals that the refinery is currently underperforming relative to a recent baseline.

A Margin Opportunity event can be triggered when real-time estimated margin per barrel rises above the previous day benchmark by a defined threshold. This helps teams see when operations are performing above expectation and may justify different operating decisions.

A Margin Collapse event can be triggered when margin falls below a defined floor, such as eight dollars per barrel. This type of event points to a potentially serious issue such as a major unit outage, feed constraint, or product quality problem.
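
The three event definitions above can be sketched as a single classification function. This is an illustrative sketch, not the application's actual event engine; the opportunity threshold is an assumed value, and only the eight-dollar collapse floor comes from the example in the text:

```python
from typing import Optional

def classify_margin_event(rt_margin: float, prev_day_margin: float,
                          opportunity_delta: float = 1.0,
                          collapse_floor: float = 8.0) -> Optional[str]:
    """Classify real-time estimated margin per barrel against the
    previous-day benchmark. `opportunity_delta` is an assumed threshold;
    `collapse_floor` follows the $8/bbl example in the text."""
    if rt_margin < collapse_floor:
        return "Margin Collapse"
    if rt_margin < prev_day_margin:
        return "Margin Degradation"
    if rt_margin > prev_day_margin + opportunity_delta:
        return "Margin Opportunity"
    return None  # within normal range of the benchmark

print(classify_margin_event(7.5, 12.0))   # Margin Collapse
print(classify_margin_event(10.0, 12.0))  # Margin Degradation
print(classify_margin_event(13.5, 12.0))  # Margin Opportunity
```

Note the ordering: the collapse check runs first, so a margin below the floor is always reported as a collapse even though it is also below the benchmark.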

Margin Collapse Events Notification and Analysis

Why This View Is Operationally Useful

The main value of this application is not that it creates another dashboard. Its value is that it connects operating layers that are often reviewed separately.

For operators, it gives a clearer view of whether unit conditions are aligned with plan and whether those deviations matter beyond the local unit.

For process engineers, it makes it easier to tie quality and yield behavior back to upstream unit performance.

For planners and refinery economics teams, it shortens the path from economic change to operational explanation.

For site leadership, it creates a more direct line of sight between current operating conditions and refinery-wide impact.

This leads to several practical improvements:

  • faster detection of throughput gaps
  • earlier recognition of developing blend risk
  • better understanding of how process changes affect profitability
  • more consistent response across teams
  • stronger shared context between operations and planning

Implementation Path

A practical rollout usually starts with one refinery application rather than an attempt to model everything at once.

The first phase is to stand up the core unit, blend, and financial model with the most important telemetry and indicators. The goal at this stage is to make the key operating picture visible.

The second phase expands the model with deeper process tags, richer event logic, and more complete dashboards for different roles.

The third phase focuses on optimization: refining thresholds, improving event quality, and using the historical record to support more advanced analysis.

The fourth phase scales the model across additional use cases, other sites, or more advanced workflows such as anomaly detection, forecasting, or operating guidance.

Implementation Foundation

This refinery application is built on an industrial data foundation that can continuously ingest time-series telemetry, preserve historical context, support key performance indicator calculations, organize data by asset, and surface meaningful events across the operating model.

In this implementation, TDengine provides that foundation. It supports the historical data layer, the asset-context structure, the performance calculations, the event logic, and the dashboards used throughout the application. Because the underlying model, logic, and views can be packaged and reused, the same setup can also be extended or adapted to other refinery environments.

Conclusion

Refinery performance does not live in one screen. Throughput, blend quality, and site economics move together, and operations teams need a way to see them together if they want to respond earlier and operate more consistently.

This application shows one practical way to do that. By connecting major unit performance, finished-product quality, and real-time economic indicators in a shared operational model, refinery teams can move from isolated signals to a more useful picture of what the site is actually doing and where performance is beginning to drift.

A strong starting point is to focus on one application with clear operational and economic value. In many refineries, that means beginning with the relationship between unit throughput, blend quality, and margin impact, then expanding the model from there.

  • Jim Fan

    Jim Fan is the VP of Product at TDengine. With a Master's Degree in Engineering from the University of Michigan and over 15 years of experience in manufacturing and Industrial IoT spaces, he brings expertise in digital transformation, smart manufacturing, autonomous driving, and renewable energy to drive TDengine's solution strategy. Prior to joining TDengine, he worked as the Director of Product Marketing for PTC's IoT Division and Hexagon's Smart Manufacturing Division. He is currently based in California, USA.