
Reference Solution: AVEVA PI System PI Point Changes to Narrow Table Format

This article captures the design considerations for creating a solution that subscribes to the PI Data Archive to obtain PI Point value changes and creates a data payload in a narrow table format.

What Does This Article Cover?

An Intelligence Hub solution can be created to subscribe to AVEVA PI System to obtain value changes for PI Points.  The solution can be configured to shape the data payload so that the values for all PI Points are written to the same values column.  The following describes how this solution can be created.
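The reshaping described above can be sketched as follows. This is a minimal, hypothetical illustration of mapping value changes for multiple PI Points into rows that share a single values column; the field names (`PointId`, `Timestamp`, `Value`, `Action`) and the event shape are assumptions for illustration, not the exact Intelligence Hub payload schema.

```python
# Hypothetical sketch: reshaping PI Point value changes into narrow-table rows.
# Field names and the change-event shape are illustrative assumptions.

def to_narrow_rows(change_event):
    """Flatten one change event into rows that share a single value column."""
    rows = []
    for change in change_event["changes"]:
        rows.append({
            "PointId": change["pointId"],
            "Timestamp": change["timestamp"],
            "Value": change["value"],          # all PI Points share this column
            "Action": change_event["action"],  # add, update, insert, or delete
        })
    return rows

event = {
    "action": "update",
    "changes": [
        {"pointId": 101, "timestamp": "2024-01-01T00:00:00Z", "value": 72.4},
        {"pointId": 102, "timestamp": "2024-01-01T00:00:00Z", "value": "Running"},
    ],
}
rows = to_narrow_rows(event)
```

Because every PI Point writes to the same `Value` column, the table stays narrow regardless of how many points the query returns.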

Design Assumptions

The following are some of the design considerations.

  • This design approach should be used when it is necessary to obtain all value changes, including late-arriving data.
  • This approach uses the Intelligence Hub PI Connection Point Changes Input, which returns all adds, updates, inserts, and deletes for the PI Points identified in the query.
  • The Connection Input creates a subscription via PI Data Pipes. The subscription remains active for ten minutes and is reactivated each time it is queried.
  • The Connection uses the query as defined at the time the subscription is created. If PI Points are added or deleted in PI, for example, the query and/or subscription name may need to be changed. If the query results change, a new subscription identification name is needed.
  • The reference solution captures each PI Point's Point ID. The assumption is that the narrow table is linked to a table containing Point metadata.
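The last assumption above can be sketched as a simple lookup: narrow-table rows carry only the Point ID, and a separate metadata table supplies names and units at query time. The table and column names here (`Name`, `EngUnits`) are hypothetical examples, not a prescribed schema.

```python
# Hypothetical sketch: enriching narrow-table rows with Point metadata by
# Point ID. Metadata column names are illustrative assumptions.
metadata = {
    101: {"Name": "Sinusoid", "EngUnits": "degF"},
    102: {"Name": "PumpStatus", "EngUnits": ""},
}

narrow_rows = [
    {"PointId": 101, "Timestamp": "2024-01-01T00:00:00Z", "Value": 72.4},
    {"PointId": 102, "Timestamp": "2024-01-01T00:00:00Z", "Value": "Running"},
]

# Join each narrow row to its metadata record by Point ID.
enriched = [{**row, **metadata.get(row["PointId"], {})} for row in narrow_rows]
```

In a data warehouse this would typically be a join between the narrow values table and the metadata table rather than an in-memory lookup.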

 

Overall Solution Configuration

The following describes the Intelligence Hub configuration objects.

  • Obtain data from PI using the PI Connection Point Changes Input to get all value changes.
  • Model the desired output schema using an Intelligence Hub Model.
  • Create a Pipeline with a Flow stage configured with the desired polling frequency.
  • Configure the Connection Input as the Reference of the Flow stage.
  • Configure the Connection Output for the respective data store.


Pipeline Configuration

The following describes the Intelligence Hub Pipeline design.

  • Break up the incoming object, sending the transaction type (add, insert, update, or delete) to metadata.
  • Break up the resulting array.
  • Determine whether the value is a number.
  • Map the payload to the predefined schema using the Intelligence Hub Model.
  • Buffer transactions using a strategy that aligns with the solution's latency requirements. For example, it may be beneficial to buffer transactions for a cloud data warehouse such as Snowflake.
  • Consider using a Buffer Key based on the use case.
  • Create the desired file format for the data lake, if applicable.
  • Write to the data lake, configuring the desired file name format.
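The pipeline stages above can be sketched end to end as follows. This is an illustrative approximation only: the event shape, schema field names, and size-based flush strategy are assumptions, and the real stages are configured in Intelligence Hub rather than written as code.

```python
# Hypothetical sketch of the pipeline stages: break up the change event,
# flag numeric values, map to a fixed schema, and buffer rows before writing.

def is_number(value):
    """Determine whether the value is a number (bools excluded)."""
    return isinstance(value, (int, float)) and not isinstance(value, bool)

def run_pipeline(event, buffer, flush_size=500):
    action = event["action"]            # transaction type goes to metadata
    for change in event["changes"]:     # break up the resulting array
        row = {                         # map to the predefined model schema
            "PointId": change["pointId"],
            "Timestamp": change["timestamp"],
            "Value": change["value"],
            "IsNumeric": is_number(change["value"]),
            "Action": action,
        }
        buffer.append(row)
    if len(buffer) >= flush_size:       # buffer strategy: flush on size
        flushed, buffer[:] = list(buffer), []
        return flushed                  # rows that would be written downstream
    return []

event = {
    "action": "update",
    "changes": [
        {"pointId": 101, "timestamp": "2024-01-01T00:00:00Z", "value": 72.4},
        {"pointId": 102, "timestamp": "2024-01-01T00:00:00Z", "value": "Running"},
    ],
}
buffer = []
flushed = run_pipeline(event, buffer, flush_size=2)
```

A time-based flush (for example, every N seconds) could replace the size-based check here when latency requirements demand it.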

 

Other Considerations

The following should be considered in relation to this design.

  • The PI Connection Point Changes Input query is capable of handling thousands of PI Point values, depending on the frequency of data change.
  • The reference example simply considers whether the value is a number or not. Alternatively, the attribute's data type could be obtained from the Intelligence Hub PI Connection Point Browse Connection.
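The number-or-not check above often feeds a common narrow-table pattern: splitting the value into separate numeric and string columns so the numeric column keeps a consistent type. The column names (`ValueNumeric`, `ValueString`) below are illustrative assumptions, not part of the reference solution's schema.

```python
# Hypothetical sketch: splitting a value into separate numeric and string
# columns for a narrow table. Column names are illustrative assumptions.

def split_value(value):
    """Route numeric values and everything else into separate columns."""
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return {"ValueNumeric": float(value), "ValueString": None}
    return {"ValueNumeric": None, "ValueString": str(value)}
```

If the attribute's actual data type were obtained via the Point Browse Connection instead, the routing decision could be driven by that metadata rather than by inspecting each value.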

 

Solution Video

The following video captures a reference solution.

 

Additional Resources