Comprehensive Monitoring and Predictive Maintenance System for Green Energy Operations

SmartOPS (Shanghai Electric)
An energy storage system can draw on an end-to-end chain of technologies, from IoT to Big Data and Machine Learning, covering information collection, cloud access, data storage, and data analysis. For customers, a smart energy storage management and control platform can provide comprehensive monitoring, predictive maintenance, and thermal management analysis, helping them make the most efficient use of their energy storage equipment.

Application Background

The SmartOPS system supports both cloud and local deployment. Cloud deployment is based on the current unified architecture of Shanghai Electric Group, which uses a cloud-based time series database with scalable, flexible configuration of resources. For local deployment, however, it is necessary to work within the limitations of local hardware resources, such as the memory, CPU, and read/write performance of the power station systems. The current hardware configuration of a power station system is shown below.

  • CPU: Intel Atom N2600 1.6 GHz
  • Memory: 2 GB DDR3 SDRAM
  • Display: DB15 VGA interface
  • Storage: one Type I/II CompactFlash slot, two SATA disk interfaces

Therefore, we needed to consider a time series database (TSDB) suitable for deployment in the limited resources of the hardware deployed at power stations.

Technical Selection

The overall selection requirements involved many dimensions:

  • Performance: Read and write performance
  • Storage: Compression rate
  • Features: Scalability, high availability, ease of use, security
  • TCO: Maintainability, administration, training, service and support
  • Vendor: Industry recognized, innovator, longevity

Our team focused on evaluating the following databases:

  • OpenTSDB: With HBase as the underlying storage, it encapsulates its own logic layer and external interface layer. This architecture can make full use of the features of HBase to achieve high data availability and good write performance. However, compared with a native time series database, the OpenTSDB data stack is more complex, and there is still room for further optimization in terms of read and write performance and data compression.
  • InfluxDB: Currently it is the most popular time series database. Data is stored in columns, which can efficiently process, store, and query time series data, and provides a feature-rich Web platform that can visualize and analyze data.
  • Apache IoTDB: A distributed time series database specially designed for the Internet of Things. Data is stored in columns, with excellent write performance and rich data analysis functions, and can effectively handle out-of-order data.
  • TDengine: A distributed time series database designed and optimized for IoT and Big Data. Data storage is optimized for memory and disk, with extremely high write performance and rich SQL-based, non-proprietary query functions. It also provides full-stack functionality for IoT and Big Data, such as caching, stream processing, message queues, and native clustering. The installation package is less than 10 MB, and the claimed 10x performance improvement was very attractive.
  • ClickHouse: A powerful OLAP database, data is stored in columns, with extremely high data compression ratio, high write throughput and high query performance. It provides a wealth of data processing functions to facilitate various data analysis.

Because on-site local deployment required lightweight resource consumption, we first excluded OpenTSDB and Apache IoTDB: OpenTSDB is based on HBase and is relatively heavyweight, while Apache IoTDB's resource consumption is not friendly to lightweight edge devices. ClickHouse is fast for single-table queries but weaker elsewhere: joins, management, and operation and maintenance are all more complex.

The R&D team finally settled on testing InfluxDB and TDengine.

Preliminary Testing

Let’s first look at the tests for InfluxDB.

The installation package size of InfluxDB is 60.2 MB, and the resource usage after it starts is shown in the following figure:

At the test power station site, fewer than 3,000 records are written per minute; the resource consumption is shown in the figure below. The CPU consumption of InfluxDB increases, but memory usage does not change much.

A query is executed as follows:

The corresponding resource consumption for the above query is shown in the following figure:

Eventually the query failed without returning any results. With the resources available, the resource consumption of InfluxDB is simply too high: if the local application service (SmartOPS) is enabled at the same time, the needed functionality cannot be provided at all.

We then tested TDengine.

We are using TDengine version 2.1.6.0. The installation package TDengine-server-2.1.6.0-beta-Linux-x64.rpm is only 9.42MB. The backend developers deployed it to the test nodes.

The resource consumption when just deployed is as follows:

We then ran the same query as above under the same conditions. While TDengine shows a small, brief increase in CPU usage, the query returns results in milliseconds.

From the above preliminary tests, it is clear that in the case of deployment on low server resources, compared with InfluxDB, TDengine has obvious advantages. For power station systems with very limited resources, TDengine can cope much better and provides a cost-effective solution.

Since TDengine is purpose-built for IoT applications, it capitalizes on the characteristics of IoT workloads:

  • few to no data updates or deletions
  • no need for the transaction processing of traditional databases
  • far more writes than reads, unlike typical Internet applications

TDengine introduces novel concepts such as "one table per data collection point" and the supertable, which simplify the data storage structure and significantly improve the efficiency of aggregation queries. These are very important for our energy storage scenarios with limited power station resources.

TDengine Architecture

The data format returned by the power station equipment is essentially fixed: one timestamp and one value. We therefore built one supertable per station. Each subtable corresponds to one collection point, distinguishing the information collected by different devices. Based on business requirements, the number of tags is set to 5: point identifier, station ID, substation ID, unit ID, and equipment ID.

The supertable creation statement is:

CREATE TABLE ops (ts TIMESTAMP, value FLOAT) TAGS (name NCHAR(10), sid NCHAR(20), sub NCHAR(10), unit NCHAR(10), dev NCHAR(20))

The subtable creation statement is:

CREATE TABLE escngxsh02_g01_e03h01_c1 USING ops TAGS ("C1","ESCNGXSH02","G01","E03","E03H01")
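With one subtable per collection point, the names and DDL statements can be generated programmatically rather than written by hand. A minimal Python sketch (the helper names are our own, not part of TDengine; it only assembles the statement shown above):

```python
def subtable_name(sid: str, sub: str, dev: str, point: str) -> str:
    """Build the subtable name used above: station, substation,
    device, and point identifiers joined by '_' and lowercased."""
    return "_".join([sid, sub, dev, point]).lower()

def create_subtable_sql(point: str, sid: str, sub: str, unit: str, dev: str) -> str:
    """Emit the CREATE TABLE ... USING ops TAGS (...) statement
    for one collection point, matching the tag order of the supertable."""
    name = subtable_name(sid, sub, dev, point)
    tags = ",".join(f'"{t}"' for t in (point, sid, sub, unit, dev))
    return f"CREATE TABLE {name} USING ops TAGS ({tags})"

print(create_subtable_sql("C1", "ESCNGXSH02", "G01", "E03", "E03H01"))
# → CREATE TABLE escngxsh02_g01_e03h01_c1 USING ops TAGS ("C1","ESCNGXSH02","G01","E03","E03H01")
```

This makes it straightforward to create the full set of subtables for a station from its point list.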

To get an idea of the performance difference between InfluxDB and TDengine, we ran the equivalent of the following query on InfluxDB after acquiring a week's worth of data:

SELECT * FROM ops WHERE ts > 1629450000000 AND ts < 1629463600000 LIMIT 2;

When executing the query on InfluxDB, the memory usage rate reached 80%, and no results were returned even after ten minutes!

On the other hand, after using TDengine for nearly a month, we ran the same query and it took only 0.2 seconds. The performance is excellent.
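The bounds in the query above are Unix epoch timestamps in milliseconds. As a quick sanity check on what the query actually asks for, the window they span can be computed with plain Python (no database needed; UTC is assumed for the human-readable form):

```python
from datetime import datetime, timezone

start_ms, end_ms = 1629450000000, 1629463600000

# Width of the queried time window, in hours
window_h = (end_ms - start_ms) / 1000 / 3600

print(datetime.fromtimestamp(start_ms / 1000, tz=timezone.utc))  # 2021-08-20 09:00:00+00:00
print(round(window_h, 2))  # 3.78
```

So the query scans roughly a 3.8-hour slice of data, yet only asks for two rows back.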

TDengine in Practice

The technical teams at Shanghai Electric have adopted TDengine as the core time series database for the SCU (Station Control Unit) architecture. This architecture provides comprehensive information, local operational control, and coordinated protection functions for the energy storage system. Data analysis and operation optimization provide the highest level of safety for a power station, and TDengine's high-performance writes and aggregation queries deliver millisecond responses for effective power station monitoring.
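As a sketch of the kind of aggregation query this involves, the helper below assembles a TDengine 2.x downsampling statement over the `ops` supertable from earlier, averaging one device's readings in fixed windows via a tag filter and `INTERVAL`. The function name and the chosen window are our own illustration; executing the statement requires a live TDengine connection:

```python
def downsample_sql(dev: str, start_ms: int, end_ms: int, window: str = "1m") -> str:
    """Build a TDengine downsampling query: average of `value` for one
    device's collection points, bucketed into `window`-sized intervals."""
    return (
        f"SELECT AVG(value) FROM ops "
        f"WHERE dev = '{dev}' AND ts >= {start_ms} AND ts < {end_ms} "
        f"INTERVAL({window})"
    )

print(downsample_sql("E03H01", 1629450000000, 1629463600000))
```

Because `dev` is a supertable tag, the filter is applied across all matching subtables without naming them individually.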

In terms of storage, TDengine also excels at data compression. When we used InfluxDB, the data volume in one day was more than 200 MB. After switching to TDengine, the data volume in one day, for the same number of collection points, was less than 70 MB, about one third of what InfluxDB required.
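The "one third" figure follows directly from the two daily volumes reported above:

```python
influx_mb, tdengine_mb = 200, 70  # daily on-disk data volumes reported above

ratio = tdengine_mb / influx_mb
print(f"TDengine needs {ratio:.0%} of the space InfluxDB used per day")
# → TDengine needs 35% of the space InfluxDB used per day
```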

In future projects, we plan to use TDengine's native distributed clustering to fully digitize the operation of power stations. By combining analysis, prediction, and data mining algorithms, we aim to provide stability, efficiency, and loss analysis, thermal management, and diagnostic capabilities such as failure prediction and the pinpointing of performance issues.