Getting Started: Recognizing Value from a POC
How to estimate the value of an Intelligence Hub approach.
What Does This Article Cover?
This article covers considerations for recognizing value from your Intelligence Hub Proof of Concept (POC). Using this article, HighByte Champions can estimate the cost savings of the Intelligence Hub compared to other approaches.
- What is the purpose of a POC?
- Technical Debt
- Value of Scalability
- Value of Maintainability
- Considerations for Calculating Cost
- Summary
What is the purpose of a POC?
HighByte offers a free 30-day POC for the Intelligence Hub. To ensure you get the most out of your POC, we encourage you to focus on a specific use case that solves an actual problem or unlocks a valuable opportunity within your organization. The goal is to accelerate your understanding of the Intelligence Hub through a practical use case that delivers tangible results and value to your organization.
Technical Debt:
When calculating the value of an Intelligence Hub solution, we recommend comparing your use case approach against alternative methods. Other data operations approaches, such as custom-coded integrations, often incur technical debt over time. Technical debt refers to the direct and indirect future costs incurred when solutions are developed or implemented quickly to meet immediate needs, without regard for long-term maintainability or scalability. Consider the following examples of technical debt:
- Development and maintenance of custom scripts: Relying on custom code for data operations use cases can lead to significant technical debt, making it challenging to scale use cases across sites. Consider the following examples:
- In custom code, technical debt often resides in undocumented domain knowledge embedded in the logic, along with complexity specific to the chosen programming languages and libraries. This creates a critical reliance on the original author to maintain the use case throughout its lifecycle (a hypothetical example of such a script follows this list).
- When scaling a use case to a new site, the custom logic often needs to be re-engineered to account for the unique characteristics of that site.
- Inefficient data preparation practices: Data scientists may spend excessive time finding, massaging, and preparing data for analytics, hindering timely decision-making.
- Data preparation is often embedded in point-to-point solutions, leading to a "spaghetti architecture" in which operational teams must manage integrations across many disparate locations.
- Data preparation is performed far removed from the domain experts. This is inefficient, as data scientists must repeatedly engage those experts to properly clean and prepare operational data.
- Forcing data operations use cases into SCADA or IoT platforms: When developing use cases on platforms that are not designed for data operations, several forms of technical debt can accumulate due to misalignment between the platform’s capabilities and the needs of data-driven applications:
- Data Modeling challenges: SCADA and IoT platform data models are generally designed for local consumption and not for diverse external consuming systems. This creates challenges in adding context, governance, and enablement of use cases in other consuming systems.
- Unoptimized Data Processing: Data processing options in SCADA and IoT platforms are often simplistic, designed for basic thresholds rather than offering flexible polling and event processing logic.
- Lack of scalability options: SCADA and IoT platforms generally cannot scale use cases quickly to full payload requirements without relying on custom code, and that same reliance on custom code prevents use cases from scaling readily from site to site.
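To make the custom-code examples above concrete, consider the following hypothetical point-to-point integration script. Every tag address, constant, and payload field in it is invented for illustration; the point is to show where undocumented domain knowledge and site-specific assumptions tend to hide.

```python
# Hypothetical point-to-point integration script. Every tag address,
# constant, and payload field is invented for illustration.

class FakePlcClient:
    """Stand-in for a real PLC driver so the sketch runs end to end."""
    def read(self, tag: str) -> float:
        return 451.0  # pretend raw Fahrenheit reading

def read_furnace_temperature(plc_client) -> dict:
    # Site-specific tag address: must be re-discovered and re-coded
    # at every new site that adopts this use case.
    raw = plc_client.read("PLC1.DB42.TEMP_F")

    # Undocumented domain knowledge: Fahrenheit-to-Kelvin conversion plus
    # a calibration offset (-1.8) that only the original author can explain.
    temp_k = (raw - 32.0) * 5.0 / 9.0 + 273.15 - 1.8

    # Consumer-specific payload shape baked into the producer: changing
    # the data lake schema means changing this script as well.
    return {"site": "PLANT_A", "asset": "FURNACE_01", "temp_k": temp_k}

print(read_furnace_temperature(FakePlcClient()))
```

A script like this works until the original author moves on, the target schema changes, or a second site with different tag addresses needs the same use case.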
Value of Scalability:
The full value of the Intelligence Hub is recognized when usage begins to scale. Scale should be considered in multiple ways:
- Site Scale: Develop an Intelligence Hub solution once and utilize dynamic design capabilities to scale to many data points at the licensed site. An example is scaling an AVEVA PI System to Data Lake solution from 100 source PI points to 100,000. The time spent in the Intelligence Hub scaling this use case to the full 100,000 source points is a fraction of the time required by other development methods, where custom code would likely need to be written to reach that scale (a generic sketch of the dynamic approach follows this list). That custom code is unlikely to be reused efficiently at other sites, resulting in significant technical debt that operations teams must manage over time.
- Enterprise Scale: Develop Intelligence Hub solutions and utilize dynamic design capabilities to scale to other sites using Central Configuration or CI/CD processes. An example is scaling a standard PI to Data Lake solution from one site to ten, delivering standard configurations (Models, Transformations, Pipelines, etc.) to the additional sites. The time spent scaling this solution to sites 2 through 10 is a fraction of that required by other methods; generally, the only updates needed at those sites are connecting and mapping source data to the use case configurations. With other approaches, new custom code will likely need to be written to accommodate the unique aspects of each additional site, creating further technical debt that operations teams must manage over time.
- Scale Intelligence Hub Usage: The design concepts and principles learned during the POC can be applied to other use cases. Once source systems are connected, users can quickly build additional inputs to execute other use cases. Consider the previous PI to Data Lake use case: perhaps another business group requires a different PI dataset (e.g., assets vs. Event Frames). A data engineer can build a new input on the existing PI Connection and deliver a new solution in less time than with other methods. Or perhaps another business group requires the same data in a different target system; a data engineer can add a secondary target within the existing Intelligence Hub Pipeline and deliver data to many targets. These additional use cases can be executed quickly through the abstraction layer the Intelligence Hub provides, enabling data engineers to work through use case backlogs.
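The following is a generic sketch of the dynamic-design idea referenced above, expressed in plain Python. The naming convention, function, and point counts are assumptions for illustration only and do not depict the Intelligence Hub's actual configuration format; the concept is that source points are derived from a convention rather than hand-enumerated, so growing from pilot to production is a parameter change.

```python
# Generic sketch of "configure once, scale dynamically." The naming
# convention and point counts are assumptions for illustration and do not
# depict the Intelligence Hub's actual configuration format.

def build_point_list(site: str, units: int, sensors_per_unit: int) -> list[str]:
    """Derive source point names from a convention instead of
    hand-enumerating each one (the custom-code alternative)."""
    return [
        f"{site}.UNIT{u:03d}.SENSOR{s:03d}"
        for u in range(1, units + 1)
        for s in range(1, sensors_per_unit + 1)
    ]

# Scaling from a 100-point pilot to 100,000 points is a parameter change,
# not a re-engineering effort.
pilot = build_point_list("PLANT_A", units=10, sensors_per_unit=10)
full = build_point_list("PLANT_A", units=1000, sensors_per_unit=100)
print(len(pilot), len(full))  # 100 100000
```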
Value of Maintainability:
DataOps solutions evolve over time, requiring continuous lifecycle management, adaptability, and maintainability to meet changing needs. HighByte's hub-and-spoke approach offers organizations a highly versatile abstraction layer, helping to eliminate technical debt. Consider the following scenarios when estimating the total cost of solution maintenance:
- Changes to source and target systems: Changes to source and target systems require data engineers to refactor solution definitions. Most commonly, software upgrades or complete software replacements require integration details to be tested, validated, and updated for production use cases. Intelligence Hub users can safely test in non-production environments and deploy updates to production through Central Configuration sync or CI/CD processes. For example, if an ERP system is entirely replaced by a new vendor's product, a data engineer can create a new Connection and update the Pipeline's Connection without re-engineering the whole solution (a minimal sketch of this abstraction follows this list).
- Use case requirements: Data consumers will inevitably have evolving requirements. With the initial use case's success, new possibilities for industrial data will naturally emerge. Consumers will request more data points, different data formats, and additional target systems, all of which necessitate updates to the use case definitions. With the abstraction layer the Intelligence Hub provides, coupled with the hub-and-spoke model, users can quickly test solution updates in a development environment, then validate and sync those updates into production with minimal effort.
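The following is a minimal sketch, in plain Python, of the abstraction-layer idea behind the hub-and-spoke model. The class and method names are invented for illustration and do not represent the Intelligence Hub's API; the takeaway is that when a pipeline depends on a small interface, replacing an ERP vendor means swapping one connection, not re-engineering the solution.

```python
# Minimal sketch of the abstraction-layer idea behind hub-and-spoke.
# Class and method names are invented and do not represent the
# Intelligence Hub's API.

from typing import Protocol

class ErpConnection(Protocol):
    def fetch_work_orders(self) -> list[dict]: ...

class LegacyErp:
    def fetch_work_orders(self) -> list[dict]:
        return [{"id": 1, "status": "OPEN"}]  # stand-in for the old vendor

class ReplacementErp:
    def fetch_work_orders(self) -> list[dict]:
        return [{"id": 1, "status": "OPEN"}]  # stand-in for the new vendor

def run_pipeline(erp: ErpConnection) -> None:
    # Models, transformations, and targets are untouched when the
    # connection behind this interface changes.
    for order in erp.fetch_work_orders():
        print("deliver to target:", order)

run_pipeline(LegacyErp())       # before the ERP replacement
run_pipeline(ReplacementErp())  # after: only the connection was swapped
```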
Considerations for Calculating Cost:
When calculating the value of and return on investment in an Intelligence Hub POC, it is recommended to compare use case execution against existing or alternative methods. Consider the following questions when engaging key stakeholders and decision makers:
- Can the use case be executed without the Intelligence Hub?
- If so, how much time is spent on new use case development with other methods?
- How much maintenance is expected for this use case over time, as source and target systems experience changes?
- How much time is expected to re-engineer if the use case requirements change?
- How much does it cost to scale the use case to other locations?
When performing an Intelligence Hub cost analysis, you may need to make assumptions about personnel requirements, engineering hours, and expected maintenance over time. Even with conservative estimates, the Intelligence Hub typically remains cost-effective, offering significant savings and efficiency compared to alternative approaches.
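The following is a simple worked example of such a cost comparison. Every figure is a placeholder assumption; substitute your own engineering rates, hours, site counts, and license costs to build a defensible estimate for your organization.

```python
# Simple cost-comparison sketch. Every figure is a placeholder assumption;
# substitute your own rates, hours, site counts, and license costs.

HOURLY_RATE = 120  # assumed blended engineering rate (USD/hour)
SITES = 10         # sites in scope after the POC
YEARS = 3          # evaluation horizon

# Assumed effort per site, in hours: initial build plus annual maintenance.
custom_code = {"build": 400, "maintain_per_year": 120}
intelligence_hub = {"build": 80, "maintain_per_year": 20}

def total_cost(effort: dict) -> float:
    hours = SITES * (effort["build"] + effort["maintain_per_year"] * YEARS)
    return hours * HOURLY_RATE

savings = total_cost(custom_code) - total_cost(intelligence_hub)
print(f"Custom code:       ${total_cost(custom_code):,.0f}")
print(f"Intelligence Hub:  ${total_cost(intelligence_hub):,.0f}")
print(f"Estimated savings: ${savings:,.0f}")  # add license costs for a full picture
```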