Historians: Net-Consumer
This article provides a starter solution for obtaining real-time data, applying a mapping, and writing the values to a historian.
What Does This Article Cover?
HighByte Intelligence Hub can be used to create a solution that obtains real-time values and writes them to a historian. For this solution it is necessary to link the source data to the destination. HighByte refers to this scenario as "historian net-consumer". This article summarizes a starter solution that obtains real-time data from the KEPServerEX OPC UA Server and writes it to the AVEVA PI System Data Archive historian. The solution maps OPC UA tag paths to PI Point Names.

Solution Assumptions
The following summarizes the assumptions related to the example solution.
- The scope of the starter solution included dozens of OPC UA tags and PI Points.
- The frequency of value changes varies per OPC UA tag. Values were obtained on change via an OPC UA subscription, and there were hundreds of value changes per minute for the in-scope OPC UA tags.
- The values were obtained from KEPServerEX and stored in AVEVA PI System Data Archive.
- AVEVA PI System was installed on an Amazon EC2 instance. The Intelligence Hub PI Connection agent was installed on the same EC2 instance.
- Kepware KEPServerEX was installed on a second Amazon EC2 instance.
- Intelligence Hub was installed on a third Amazon EC2 instance. The instance had 8 GB of RAM, and the heap memory allocated to the Java Virtual Machine (JVM) was not adjusted.
- The Intelligence Hub Pipeline wrote data to PI Points.
- The solution requires unique OPC UA tag paths to be associated with PI Point Names. This association was made in a Microsoft SQL Server database table.
Solution Summary
The following summarizes the design of the example solution.
- The in-scope OPC UA Tag paths and PI Point Names are stored in a Microsoft SQL Server database table.
- The first step in creating the solution is to obtain the values for the in-scope OPC UA tags. The OPC UA tag paths might be obtained from a file, a database, or the OPC UA Server. In this case the OPC UA tag paths were obtained from a database table using a SQL Server Connection Input. The query returned the list of in-scope OPC UA tag paths. Cache can be enabled on the Connection Input.
- Next, the Connection Input to obtain the OPC UA tag values can be configured. The Intelligence Hub OPC UA Connection Tag Type Input may be used. The Connection Input that obtains the OPC UA tag paths can be used as the Reference of a Dynamic Template. Ideally the Connection Input should obtain data in under a second. Start with a small number of OPC UA tag paths and increase the count to optimize. The include metadata option needs to be enabled because the OPC UA tag path will be used to look up the respective PI Point Name and the OPC UA timestamp will be written to PI Data Archive.
- The Intelligence Hub Pipeline design consists of an event trigger to obtain the OPC UA values with metadata, a read stage to obtain the PI Point Name associated with the respective OPC UA tag path, a JavaScript transform that reshapes the payload into an array to enable efficient writes to PI Data Archive, and a write stage that writes to PI Data Archive. The payload transformation is somewhat complex because each OPC UA value needs to be assigned to its PI Point Name.
- The solution needs to look up the PI Point Name for a given OPC UA tag path. It might not be prudent to query the source Microsoft SQL Server database for each value change, so the solution replicates the contents of the SQL Server table to an in-memory SQLite table. The Pipeline queries the SQLite table to obtain the PI Point Name for each OPC UA tag path.
- When optimizing the Pipeline, consider the volume of data being processed. It might not be possible to use the Replay capability due to that volume.
- A project file may be downloaded [here]. The included project file is the same one used in the video below. You can download it and import it into a runtime version 4.3 or later to explore and experiment.
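The transform step described above can be sketched in plain JavaScript. This is an illustrative sketch, not the actual Intelligence Hub Pipeline API: the event shape (`path`, `value`, `timestamp`), the sample tag paths and PI Point Names, and the `buildPiWrites` function name are all assumptions made for the example. In the real Pipeline, the payload shape depends on the OPC UA Connection Input, and the mapping lookup is a query against the replicated SQLite table rather than an in-memory Map.

```javascript
// Hypothetical shape of OPC UA value-change events with metadata enabled.
const events = [
  { path: "Channel1.Device1.Tag1", value: 42.5, timestamp: "2024-01-01T00:00:00Z" },
  { path: "Channel1.Device1.Tag2", value: 17.0, timestamp: "2024-01-01T00:00:01Z" },
];

// Stand-in for the SQLite lookup of OPC UA tag path -> PI Point Name.
// In the solution this mapping is replicated from SQL Server into SQLite.
const pointNameByPath = new Map([
  ["Channel1.Device1.Tag1", "PI.Tag1"],
  ["Channel1.Device1.Tag2", "PI.Tag2"],
]);

// Build an array of PI writes so the write stage can send one batched
// request instead of one write per value change.
function buildPiWrites(events, pointNameByPath) {
  const writes = [];
  for (const evt of events) {
    const pointName = pointNameByPath.get(evt.path);
    if (pointName === undefined) continue; // no mapping: skip this event
    writes.push({ pointName, value: evt.value, timestamp: evt.timestamp });
  }
  return writes;
}
```

The array output is what makes the batched write efficient: one write-stage call per trigger event rather than one per value change.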

Results and Recommendations
The following summarizes the results of running the example solution.
- The Pipeline was configured with an Event Trigger. On average the Pipeline executed in a few milliseconds.
- The Pipeline processed about 2,500 value changes per minute.
- The example solution did not include error handling in the Pipeline, for example for missed reads of PI Point Names, data type mismatches, or an unexpected flood of OPC UA data.
- The Pipeline should be optimized based on its Queue count; the Queue count should stabilize and not continue to increase.
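As a sketch of the error handling noted above, a transform could partition events into writable values and errors rather than silently dropping problem records, so that unmapped paths or type mismatches could be routed to a separate target for review. The event shape, the `partitionEvents` name, and the numeric-value check are assumptions for illustration, not part of the Intelligence Hub API.

```javascript
// Partition incoming events into writable values and errors so a Pipeline
// could route unmapped paths or unexpected data types to a dead-letter target.
function partitionEvents(events, pointNameByPath) {
  const writes = [];
  const errors = [];
  for (const evt of events) {
    const pointName = pointNameByPath.get(evt.path);
    if (pointName === undefined) {
      errors.push({ reason: "no PI Point mapping", path: evt.path });
    } else if (typeof evt.value !== "number") {
      errors.push({ reason: "unexpected data type", path: evt.path });
    } else {
      writes.push({ pointName, value: evt.value, timestamp: evt.timestamp });
    }
  }
  return { writes, errors };
}
```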
Additional Resources
- Intelligence Hub PI Connection
- Intelligence Hub OPC UA Connection
- Intelligence Hub SQLite Connection
- Intelligence Hub Microsoft SQL Server Connection
- Intelligence Hub Pipelines