Reference Solution: AVEVA PI System Historical Values to Narrow Table Format

This article captures the design considerations for creating a solution that queries PI Data Archive to obtain historical PI Point values and creates a data payload in a narrow table format.

What Does This Article Cover?

An Intelligence Hub solution can be created to obtain historical PI Point values from PI Data Archive. The solution can be configured to shape the data payload so that the values for all PI Points are written to the same values column, rather than to one column per point. The following describes how this solution can be created.
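For illustration, the target narrow (tall) table holds one row per PI Point value, with every point writing to the same value column. The row and column names below are hypothetical examples, not part of the Intelligence Hub configuration.

```python
# Hypothetical narrow-table rows: one row per PI Point value, with every point
# sharing the same "value" column (point and column names are illustrative only).
narrow_rows = [
    {"point_name": "Sinusoid",       "timestamp": "2024-01-01T00:00:00Z", "value": 42.7,   "is_numeric": True},
    {"point_name": "Sinusoid",       "timestamp": "2024-01-01T00:01:00Z", "value": 43.1,   "is_numeric": True},
    {"point_name": "CDT158",         "timestamp": "2024-01-01T00:00:30Z", "value": 118.4,  "is_numeric": True},
    {"point_name": "Valve01.Status", "timestamp": "2024-01-01T00:00:00Z", "value": "Open", "is_numeric": False},
]
```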

Design Assumptions

The following are some of the design considerations.

  • This design approach should be used to obtain historical PI Point data for a narrow table in a data lake or data warehouse.
  • This design approach uses the Intelligence Hub PI Connection Point Input, which can be configured to use an index window.
  • This design approach does not obtain historical values for asset attributes that are not associated with PI Points. Those values can be obtained in Intelligence Hub using an Asset Type PI Connection Input.

 

Overall Solution Configuration

The following describes the Intelligence Hub configuration objects.

  • Obtain data from PI Data Archive using the PI Connection Point Input. Typically, an index window is configured for the desired date range (a conceptual sketch of the index window follows this list).
  • Model the desired output schema using an Intelligence Hub Model.
  • Create a Pipeline with a Flow stage configured with the desired polling frequency.
  • Configure the Connection Input as the Reference of the Flow stage.
  • Configure the Connection Output for the respective data store.
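To illustrate the index window concept, the sketch below reads only the slice of history that arrived since the previous poll. The function and variable names are assumptions for illustration; in the actual solution this behavior is configured on the PI Connection Point Input rather than hand-coded.

```python
from datetime import datetime, timedelta, timezone

def query_pi_archive(point_names, start, end):
    """Hypothetical placeholder for the PI Connection Point Input query."""
    return []  # would return recorded values in the window [start, end)

# The index window acts like a watermark: each poll advances it so the same
# historical values are not read twice.
last_index = datetime(2024, 1, 1, tzinfo=timezone.utc)
poll_interval = timedelta(minutes=5)  # the Flow stage's polling frequency

def poll_once(point_names):
    global last_index
    window_end = datetime.now(timezone.utc)
    values = query_pi_archive(point_names, last_index, window_end)
    last_index = window_end  # advance the index for the next poll
    return values
```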


Pipeline Configuration

The following describes the Intelligence Hub Pipeline design.

  • Break up the incoming object (an end-to-end sketch of this payload shaping follows this list).
  • Break up the resulting array(s).
  • Determine whether each value is a number.
  • Map the payload to the predefined schema using the Intelligence Hub Model.
  • Buffer transactions using a strategy that aligns with the solution's latency requirements. Buffering can be beneficial when writing to a cloud data warehouse such as Snowflake.
  • Consider the use of a Buffer Key based on the use case.
  • Create the desired file format for the data lake, if applicable.
  • Write to the data lake, configuring the desired file name format.
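The stages above can be approximated with the sketch below. The incoming payload shape, column names, and file naming are assumptions for illustration; in the actual solution each step is an Intelligence Hub Pipeline stage rather than hand-written code.

```python
import json
from datetime import datetime, timezone

def is_number(value):
    """The 'is the value a number?' check used to flag numeric values."""
    return isinstance(value, (int, float)) and not isinstance(value, bool)

def to_narrow_rows(pi_payload):
    """Break up the incoming object and its value arrays into one row per value."""
    rows = []
    for point_name, samples in pi_payload.items():   # break up the incoming object
        for sample in samples:                        # break up the resulting array(s)
            value = sample["value"]
            rows.append({                             # map to the predefined schema
                "point_name": point_name,
                "timestamp": sample["timestamp"],
                "value": value,
                "is_numeric": is_number(value),
            })
    return rows

def flush_buffer(buffer):
    """Write buffered rows to a file whose name encodes the flush time."""
    file_name = f"pi_history_{datetime.now(timezone.utc):%Y%m%d_%H%M%S}.json"
    with open(file_name, "w", encoding="utf-8") as f:  # stand-in for the data lake write
        json.dump(buffer, f)
    buffer.clear()

# Example usage with a hypothetical payload from the PI Connection Point Input.
payload = {
    "Sinusoid":       [{"timestamp": "2024-01-01T00:00:00Z", "value": 42.7},
                       {"timestamp": "2024-01-01T00:01:00Z", "value": 43.1}],
    "Valve01.Status": [{"timestamp": "2024-01-01T00:00:00Z", "value": "Open"}],
}
buffer = to_narrow_rows(payload)
flush_buffer(buffer)
```

In practice, the buffer would accumulate rows across several polls before flushing, and the file format (for example, Parquet) and naming convention would follow the data lake's requirements.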

 

Other Considerations

The following should be considered in relation to this design.

  • The PI Connection Point Input query performs well and is capable of handling thousands of PI Points, depending on the frequency of data change.
  • The Reference used in the Connection Input is a list of in-scope PI Point names. This list may be obtained using a Connection Input, a Condition, or multiple Pipeline stages.
  • The reference solution simply considers whether the value is a number or not. Alternatively, each attribute's data type could be obtained from the Intelligence Hub PI Connection Asset Metadata Connection (a sketch of this alternative follows this list).
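As a sketch of that alternative, the point's configured data type, obtained ahead of time from asset metadata, could drive the numeric classification instead of inspecting each value at runtime. The lookup table and type names below are illustrative assumptions.

```python
# Hypothetical lookup built from PI metadata: point name -> configured data type.
point_data_types = {
    "Sinusoid": "Float32",
    "Valve01.Status": "Digital",
}

NUMERIC_TYPES = {"Int16", "Int32", "Float16", "Float32", "Float64"}

def is_numeric_point(point_name):
    """Classify a point by its configured data type rather than by each value."""
    return point_data_types.get(point_name) in NUMERIC_TYPES
```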

 

Solution Video

The following video captures a reference solution.

 

Additional Resources