
How-To: Monitor Intelligence Hub Component Health

What Does This Article Cover?

  • Introduction
  • Quick Start Guide to Monitor a Single Connection or Pipeline
  • Example Output Data
  • Utilizing the intelligencehub-events.log with an External Log Consumer
  • Utilizing the REST Data Server to Obtain Component Health
  • Attached Project

Introduction

Monitoring Connections and Pipelines can be crucial in manufacturing and Industrial IoT architectures. This guide provides a step-by-step process to monitor Connections and Pipelines and transmit their state data to your preferred output. The sample project included with this guide sends connection and pipeline data to MQTT and to the log file. However, you can also use this guide to send the payloads to another connection you've configured within the HighByte Intelligence Hub, access status data in an instance using the REST data server, or output the data through a REST output to third-party tools like Splunk, Datadog, or Prometheus.

Quick Start Guide to Monitor a Single Connection or Pipeline

  1. Select an output for your Pipeline
    1. Determine the destination where you want to send the status of a connection or pipeline.

      The sample project will use 'ComponentHealth/Connections/}' as an MQTT topic path. If you emulate this in your own project, the name of your pipeline and/or connection will appear in MQTT (a subscriber sketch for verifying these topics follows this list).
  2. Navigate to Pipelines and Utilize the Pipeline Wizard 
    1. Select 'New Pipeline'
    2. Name your pipeline and select 'Build Flow'. This is the Intelligence Hub pipeline wizard. 
  3. Reference the component to be monitored as a Source
    1. Under References, choose the 'System' Type.
    2. Next, for System Type select 'Input'.
    3. From here you are able to reference your Connections or Pipelines. You can expand the references for a Connection or Pipeline to reference specific metrics.
  4. Reference your Target
    1. In the reference panel select 'Output' for Type. 
    2. From here select the output you want to send your data to. 
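
Once the pipeline runs, you can confirm that status payloads are reaching the broker by subscribing to the ComponentHealth topics. Below is a minimal subscriber sketch using the Python paho-mqtt library; the broker host, port, and the 'ComponentHealth/#' wildcard are assumptions based on the sample project, so adjust them to your environment.

# Minimal subscriber to verify component-health payloads from the pipeline.
# Written against the paho-mqtt 1.x callback API; 2.x requires a CallbackAPIVersion argument.
import json
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe to every component-health topic once connected.
    client.subscribe("ComponentHealth/#")

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    print(f"{msg.topic}: healthy={payload.get('healthy')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)   # adjust to your broker's host and port
client.loop_forever()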

Example Output Data

Connection Data

{
    "_name": "connectionStatus",
    "_model": "ConnectionStatus",
    "_timestamp": 1697488915351,
    "name": "MQTT",
    "lastError": "",
    "status": "Good",
    "statistics": {
        "pendingWrites": 0
    },
    "healthy": true
}
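
A downstream consumer of this payload usually only needs the 'healthy' flag and 'lastError' field to decide whether to raise an alert. The snippet below is a minimal sketch of that check; the hard-coded payload simply mirrors the example above and would normally arrive over MQTT or another transport.

import json

# Example connection-status payload, mirroring the structure shown above.
raw = '{"name": "MQTT", "lastError": "", "status": "Good", "healthy": true}'
status = json.loads(raw)

if not status["healthy"]:
    # Hook in your own alerting (webhook, email, pager) here.
    print(f"Connection {status['name']} unhealthy: {status['lastError'] or status['status']}")
else:
    print(f"Connection {status['name']} is {status['status']}")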

Pipeline Data

{
    "Pipelines": {
        "Pipeline_Example": {
            "name": "Pipeline_Example",
            "lastError": "",
            "state": "Good",
            "stageStatus": [
                {
                    "name": "Breakup",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "InputsOutputsFromInstance",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "Transform",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "Write",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "WriteNew",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "WriteNew_1",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "Transform_1",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                },
                {
                    "name": "Breakup_1",
                    "lastError": "",
                    "pipelineName": "Pipeline_Example",
                    "state": "Good",
                    "healthy": true
                }

            ],
            "statistics": {
                "idle": false,
                "totalRuns": 3776,
                "totalErrors": 0,
                "queuedWrites": 1,
                "lastCompleteTime": "2023-10-24T15:05:43.001Z",
                "executingSamples": {
                    "meanMS": 114,
                    "minMS": 91,
                    "maxMS": 152,
                    "stdDevMS": 17
                },
                "waitingSamples": {
                    "meanMS": 0,
                    "minMS": 0,
                    "maxMS": 0,
                    "stdDevMS": 0
                }
            },
            "healthy": true

        }
    }
}
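
For pipeline payloads, a consumer can walk the 'stageStatus' array to find failing stages and pull execution timing from 'statistics'. The sketch below reads a payload shaped like the example above from standard input and prints a short summary; it is illustrative only, and the fields it accesses are the ones shown in this article.

import json
import sys

def summarize_pipeline(pipeline: dict) -> None:
    """Print unhealthy stages and basic run statistics for one pipeline payload."""
    unhealthy = [s["name"] for s in pipeline["stageStatus"] if not s["healthy"]]
    stats = pipeline["statistics"]
    print(f"{pipeline['name']}: healthy={pipeline['healthy']}, "
          f"runs={stats['totalRuns']}, errors={stats['totalErrors']}, "
          f"mean exec {stats['executingSamples']['meanMS']} ms")
    if unhealthy:
        print("  unhealthy stages:", ", ".join(unhealthy))

# Usage: python summarize.py < pipeline_status.json
data = json.load(sys.stdin)
for pipeline in data["Pipelines"].values():
    summarize_pipeline(pipeline)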

Utilizing the intelligencehub-events.log with an External Log Consumer

The Intelligence Hub captures errors in JSON format within the intelligencehub-events.log. For example, if a particular pipeline encounters an error, this will be documented in the log. By default, the Intelligence Hub does not store system metadata for pipelines and connections in the log. To send system metadata to the log on an output, navigate to your reference panel, set the 'Type' to 'System', set the 'System Type' to 'Output', and drag the 'Info' reference to your pipeline target. By forwarding system metadata to the events log, third-party tools can access and analyze this log.

It's important to note that within your settings, under the 'Logging' section, there's an option to modify the 'Max File Size'. Adjusting this ensures that your log consumer retrieves all the necessary data from the file before any old data gets overwritten by new entries.
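
One straightforward way to feed the events log into an external tool is to tail the file and parse each line as JSON. The sketch below is a minimal tail-and-forward loop; the log path is an assumption that depends on where your Intelligence Hub installation writes intelligencehub-events.log, and a production consumer should also handle the file rotating when the Max File Size is reached.

import json
import time
from pathlib import Path

# Assumed location of the events log; adjust to your installation.
LOG_PATH = Path("logs/intelligencehub-events.log")

def follow(path: Path):
    """Yield lines appended to the file, similar to 'tail -f'."""
    with path.open("r") as f:
        f.seek(0, 2)              # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

for line in follow(LOG_PATH):
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue                  # skip partial or non-JSON lines
    # Forward the event to your log consumer (Splunk, Datadog, etc.) here.
    print(event)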


Utilizing the REST Data Server to Obtain Component Health

Instead of creating a pipeline to send out data, you can utilize the REST data server to access component health data. This can be done by creating an instance with references to the Connections or Pipelines system objects. Use the /data/v1/instances/{instanceName}/Value endpoint to get a response body based on the configuration of an instance. The example instance 'REST_Data_Server_Instance' is provided in the sample project.
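
As an illustration, the endpoint can be called with a few lines of Python using the requests library. The base URL, port, and authentication below are placeholders; use whatever host, port, and credentials your REST Data Server is configured with.

import requests

# Placeholder base URL and instance name; adjust to your REST Data Server configuration.
BASE_URL = "http://localhost:8885"
INSTANCE = "REST_Data_Server_Instance"

# Add authentication (basic auth, token, etc.) as required by your configuration.
response = requests.get(f"{BASE_URL}/data/v1/instances/{INSTANCE}/Value", timeout=10)
response.raise_for_status()
print(response.json())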

Attached Project

Configuration Download 

We've attached a project that demonstrates different ways to monitor pipelines and connections. Download the project via the link above. To import the project, within the Intelligence Hub go to Manage -> Project. Set the Full Project flag to off, change Import Type to 'File', and upload the project. Once the project is loaded, make sure your MQTT broker is enabled by navigating to 'Settings'. If you need to adjust the MQTT Broker ports in settings, please also adjust the ports in the 'MQTT' connection that was imported with this project.

This project is designed for easy import into your Intelligence Hub runtime, allowing you to instantly receive metrics for your connections and pipelines in our UNS Client. Keep in mind that the model provided is an example and can be customized to suit your preferences.

Sending Component Status and Data to MQTT

"Monitor_All_To_MQTT" is a Pipeline that has a branches that read in the and references. After the reads, there is a breakup stage to breakup Connections and Pipelines into their own payloads. Then a Model Stage is used to Model the data using the Componenet_Health_Model. Lastly, discrete outputs are used to ensure Connections and Pipelines are routed into separate MQTT topic paths. The output paths use the object's name dynamically to ensure each breakup object goes to a unique path.

Because this pipeline reads in the Connections and Pipelines references, all connections and pipelines that get added to your project will be monitored. These references are located under Type: System and System Type: Input. They reflect the current project's configuration at the time of the read, so all new pipelines and connections will be included each time these references are used.

Instead of outputting to MQTT, you could send this payload, or a transformed version of it, through a REST output to an observability platform.

 
Sending Component Status and Data to the Event Log

"Monitor_All_To_Event_Log" is a pipeline that is very similar to the "Monitor_All_To_MQTT". The only change is that the pipeline is writing to the EventLog using the reference. 





Modeling Health Data in an Instance

The project also has a "REST_Data_Server_Model" which models some of the data from the System.Component reference. 

You could use the /data/v1/instances/{instanceName}/Value endpoint to get the instance value from our REST Data Server. 


Example Response: 

{
    "Name": "MQTT",
    "Healthy": true,
    "Status": "Good",
    "LastError": "Connect operation failed on uri tcp://localhost:1885. Cause: Connection refused: getsockopt."
}

 

Conclusion

Monitoring Connections and Pipelines via MQTT is an efficient method to track and report statuses. The process outlined in this article can be adapted to various outputs and can be an integral part of your digital transformation strategy.