Reference Solution: AVEVA PI System Large Number of Assets

This article captures the design considerations for obtaining data from AVEVA PI System Data Archive when there are a large number of assets configured.

What Does This Article Cover?

Intelligence Hub solutions can be created to obtain values, asset metadata, and attribute metadata from AVEVA PI System and send modeled data to a destination such as a data lake or data warehouse. This can become challenging when there are a large number of assets configured in PI System Asset Framework. The following provides an example of how a solution can be created for this scenario.

Design Assumptions

The following are some of the design considerations.

  • When the queries for Intelligence Hub PI Connection Inputs obtaining data from Asset Framework are long in duration, it may be necessary to store asset data, asset metadata, and/or point data in SQLite tables; a sketch of such a table appears after this list. (When there are a large number of PI Asset Framework assets, it might not be possible to query PI AF for metadata for all assets due to query timeout.)
  • There might be multiple root level assets defined in PI Asset Framework.
  • This design assumes that an Intelligence Hub PI Connection Input Asset Metadata query that returns all children for a root asset does not time out when querying any of the root assets (attributes do not need to be returned). This is a key aspect of the design, and there must be adequate system resources to serve this query.
  • This design assumes that it is acceptable to obtain data infrequently for the purpose of providing asset, attribute, or point metadata to a destination system. Values may be obtained frequently or by subscription, while metadata changes infrequently and is therefore obtained infrequently.
  • PI Asset Framework asset names are not unique, and therefore the elementID should be used in Intelligence Hub PI Connection Inputs.
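
As a rough illustration of the SQLite caching referenced above, the following sketch defines a minimal assets table keyed by elementID. The table and column names are hypothetical and would be adapted to the solution.

    -- Hypothetical assets cache table; table and column names are illustrative only.
    CREATE TABLE IF NOT EXISTS assets (
        elementID  TEXT PRIMARY KEY,  -- PI AF element ID (unique, unlike asset names)
        assetName  TEXT,              -- PI AF element name
        assetPath  TEXT,              -- full path within the AF hierarchy
        parentID   TEXT,              -- elementID of the parent element; NULL for root level assets
        lastSeen   TEXT               -- timestamp of the discovery run that last returned this asset
    );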
     

Overall Solution Configuration

The following describes the Intelligence Hub configuration objects.

  • Create a Process that runs each of the following steps sequentially and continuously to detect new assets:
    • Obtain asset data for all PI Asset Framework root level assets and write it to an assets SQLite table.
    • Obtain child asset data for each PI Asset Framework root level asset and write it to the assets SQLite table.
  • When all asset data has been stored in the assets SQLite table, begin obtaining asset metadata and asset attribute data and values.
  • Intelligence Hub Connections are single-threaded; therefore, a solution may need to utilize many Intelligence Hub Connections to obtain data from the PI System.
  • The Intelligence Hub PI Connections should be configured with long Request Timeout settings (for example, 90 seconds) and Gzip compression.
  • Create SQLite queries that return lists of elementIDs for each data set type being obtained from PI (asset metadata, asset attribute data, and values); see the query sketch after this list.
  • Create one or more Pipelines for each data set type being obtained from PI (asset metadata, asset attribute data, and values).
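
As a minimal sketch, assuming the hypothetical assets table shown earlier, a SQLite query of this kind could return the elementIDs to process. The WHERE clause is illustrative; the real criteria depend on the data set type and the solution.

    -- Return the elementIDs to process (assumes the hypothetical assets table above).
    SELECT elementID
    FROM assets
    WHERE parentID IS NOT NULL   -- example filter: exclude root level assets
    ORDER BY elementID;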


Pipeline Configuration

The following describes the Intelligence Hub Pipeline design.

  • Create a Flow Trigger stage configured for the desired frequency. Metadata solutions should not need to run frequently.
  • The reference for each Flow Trigger should be a SQLite Input that queries the assets table for elementIDs. Asset metadata and value solutions might process thousands of assets, while an attribute metadata solution might process hundreds of assets.
  • Create a string of comma-separated elementIDs that will be used for the respective PI Connection Input; see the sketch after this list.
  • Obtain data from PI using a parameterized Read stage that passes the list of elementIDs to the Connection Input. 
  • Process the returned data as needed. 
  • Buffer transactions using a strategy that aligns with the solution's latency requirements. It may be beneficial to buffer transactions for a cloud data warehouse such as Snowflake.
  • Consider the use of the Buffer Key based on the use case.
  • Create the desired file format for the data lake, if applicable.
  • Write to the data lake, configuring the desired file name format.
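
One way to produce the comma-separated elementID string is directly in SQLite using the group_concat aggregate, as sketched below against the hypothetical assets table; the string could equally be assembled in a Pipeline expression stage. The LIMIT shown is an assumed batch size used only to keep each PI request bounded.

    -- Build one comma-separated elementID string for the parameterized PI Connection Input.
    SELECT group_concat(elementID, ',') AS elementIDList
    FROM (
        SELECT elementID
        FROM assets
        ORDER BY elementID
        LIMIT 1000   -- illustrative batch size
    );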

 

Other Considerations 

The following should be considered in relation to this design.

  • If assets are deleted in PI Asset Framework, they may need to be retained in the assets SQLite table to enable historical data solutions.
  • Intelligence Hub Connections are single-threaded; therefore, a solution for a given data set may utilize a dedicated Intelligence Hub Connection to obtain data from the PI System.
  • Consider the value of modeling in Intelligence Hub if the PI attribute names should not be used as the payload's attribute names.

 

Solution Video

The following video captures a reference solution.

 

Additional Resources