Gathering Greater Insight from Combined Asset-Process Digital Twins
12 December 2019
by Murali Mandi, Chief Operating Officer, Honeywell Connected Enterprise, Industrial
In asset-intensive production environments – from offshore platforms to refineries, pit mines to pulp and paper mills – business leaders face unremitting pressure to compete through maximized uptime and efficiency of their industrial operations. In response, they are investing heavily in software technologies that help drive continuous improvement and greater overall equipment effectiveness (OEE) across the enterprise. To this end, the integration of rich data sources – from both plant machinery and the processes that machinery serves – is a key element in the application of modern predictive analytics and digital twins. By harnessing the right data and leveraging domain knowledge accumulated throughout the evolution of industrial equipment design, plant operations leaders can set the stage to reveal opportunities for greater process optimization, increased asset reliability, and productivity gains over and above improvements already achieved.
Process and assets have always had a synergistic relationship, in both operation and evolution. From a historic perspective, for decades process engineers worked directly with original equipment manufacturers (OEMs) to maximize that synergy by pushing asset designs to best align with process quality and other operational needs. Through these interactive relationships, OEMs evolved plant process equipment and rotating machinery designs to address industry’s need for a balance of throughput, energy efficiency, and asset reliability and longevity. These optimized designs were achieved through the continuous application of advanced metallurgies, precision manufacturing methods, and refinement through finite-element performance modeling – to name just a few. However, as machine designs matured, incremental improvement through successive design iterations became increasingly difficult to capture.
Instead, acceleration of further improvement came through advancing technologies in specialized monitoring and optimization solutions, such as machine condition monitoring systems, process modeling software, and advanced process control methods. Today, these advanced or “expert” systems combine data collection with data processing for specific needs – for example, process health or asset health. The data derived from these independent systems serves the personnel operating the plant (process historians and advanced process control), the maintenance teams and rotating equipment groups (machine diagnostic systems), and the process engineering teams tasked with controlling quality while maximizing throughput (process modeling systems).
Given such history, all would seem well, but there is at least one major caveat. Throughout the evolution of “expert” systems, the experts and their trusted tools matured relatively independently of one another. They did not naturally grow to work as collectively as they should – at least not with the level of harmony that we now know is required to take continuous improvement to next-level productivity gains.
The problem is not the people, of course; the issue is that distinct plant groups typically own the responsibilities of asset management and process management. Accordingly, both their tools and their data reside siloed within the respective groups. Likewise, each group conducts its own complex and time-consuming analysis against the backdrop of a just-in-time production environment. And while each group appreciates the other’s domain experience and organizational role, they are simply too stretched to undertake the arduous task of manually collating data sources, synchronizing data collection times, and interpreting anomalies in a combined dataset to reveal new improvement opportunities. It is in this setting that a confluence of process and asset domain experts and their systems’ data should resonate, but in practice it often fails to align. Accordingly, this is where asset-process digital twins are poised to provide a synergistic foundation similar to the one that has underpinned plant technology’s evolution throughout history.
Digital twins are not new. The term may be new, but the performance models built by OEMs for their engineering needs represent the archetype of digital twins. Where OEM models are not available, models may be created by independent companies or by the end users themselves, using engineering first principles. Whether OEM-engineered or created by others, such models are often run as calculations in a standalone offline program, in a simple spreadsheet, or in process simulation software (the overall process digital twin).
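As a concrete illustration of the kind of first-principles calculation such models contain, the sketch below computes the polytropic head of a centrifugal compressor stage – a typical spreadsheet-style performance model. All gas properties and operating conditions are assumed example values, not taken from any particular machine or Honeywell product.

```python
# A minimal first-principles performance model: polytropic head of a
# centrifugal compressor stage. All numbers are illustrative assumptions.
R = 8314.0  # universal gas constant, J/(kmol*K)

def polytropic_head(T1_K, P1_kPa, P2_kPa, MW, k, eta_p, Z_avg=1.0):
    """Polytropic head in J/kg, using the standard ideal-gas-based relation
    with an average compressibility correction."""
    m = (k - 1.0) / (k * eta_p)  # polytropic exponent term (n-1)/n
    return Z_avg * (R / MW) * T1_K * (1.0 / m) * ((P2_kPa / P1_kPa) ** m - 1.0)

# Example: natural-gas service (assumed properties and conditions).
head = polytropic_head(T1_K=300.0, P1_kPa=2000.0, P2_kPa=5000.0,
                       MW=18.2, k=1.28, eta_p=0.78, Z_avg=0.95)
print(f"{head:.0f} J/kg")
```

Comparing a calculated head like this against the head implied by measured suction and discharge conditions is one simple way such a model becomes a “twin” of the running machine.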
For machine asset health, the twin is not nearly as obvious, but in many vibration-based condition monitoring and diagnostic software packages, the twin is effectively represented in graphical form. Each graphic or plot is generated from time-based waveform data. A simple example is the use of displacement-type sensors on fluid-film bearing machines to determine, with high precision, the location of the rotating shaft within the bearing journals. This is a digital twin of the shaft’s position, highly useful in detecting machine unbalance and misalignment, and it provides overall insight into the various forces acting on the rotating shaft, including forces from the lubricating oil itself. Similarly, in the case of rolling-element bearing machines, time waveforms are converted to frequency spectra that may serve as a primary digital twin. Among other things, the twin represents the bearing’s elements (e.g. the balls in a ball bearing) and helps identify both general forces and deterioration affecting the bearing surfaces. No matter how accurate the digital twin, the operation of the same piece of equipment can be quite diverse: the actual performance of the asset is a combination of machine design and operational environment, which is where the process with which the machine interacts becomes critical.
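The waveform-to-spectrum step described above can be sketched with a standard FFT. The sampling rate, shaft speed, and bearing defect frequency below are illustrative assumptions, and the waveform is synthetic rather than measured.

```python
import numpy as np

# Illustrative parameters (assumptions, not from any specific machine):
fs = 5120        # sampling rate, Hz
duration = 1.0   # seconds of waveform captured
t = np.arange(0, duration, 1 / fs)

# Synthetic vibration waveform: a running-speed component (30 Hz shaft)
# plus a smaller tone at an assumed bearing defect frequency (147 Hz),
# with a little measurement noise.
shaft_hz, defect_hz = 30.0, 147.0
waveform = (1.0 * np.sin(2 * np.pi * shaft_hz * t)
            + 0.2 * np.sin(2 * np.pi * defect_hz * t)
            + 0.05 * np.random.default_rng(0).standard_normal(t.size))

# Time waveform -> single-sided amplitude spectrum.
spectrum = np.abs(np.fft.rfft(waveform)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The two dominant peaks recover the shaft and defect frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # → [30.0, 147.0]
```

In a real diagnostic package the spectrum would then be screened against known fault frequencies (computed from bearing geometry and shaft speed) rather than a simple peak search.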
Rarely does the process digital twin operate in unison with the asset health digital twin. Typically, these two analysis methods only come together when the respective domain experts come together to reactively troubleshoot a major issue. An example may be when a process upset drives a centrifugal compressor into severe surge, causing costly downtime and damage. Another example may be unbalance attributed to non-uniform blade fouling from process changes, or unbalance and other issues arising when that same compressor ingests liquids produced by the process or ineffectively removed by dedicated upstream separation processes. Often the precursors to such events are identifiable from the combined expert system data presentations. However, since process historians typically do not hold such rich data and are incapable of sophisticated data processing, and since the process and asset expert systems remain unintegrated, unplanned events are more frequent than they need to be. The practical consequence of this disconnect between the asset and process groups is that both machine and process parameter settings are kept more conservative – and therefore less efficient.
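As a rough sketch of how combined data could expose such precursors, the following aligns a hypothetical process historian trend with a hypothetical vibration monitor trend and flags samples where falling flow and rising vibration coincide. All column names, thresholds, and values are invented for illustration; a real asset-process twin would work on far richer data.

```python
import pandas as pd

# Hypothetical, simplified extracts from two siloed systems, sampled
# at different times (all names and values are illustrative).
process = pd.DataFrame({
    "time": pd.to_datetime(["2019-12-12 10:00:00", "2019-12-12 10:00:30",
                            "2019-12-12 10:01:00"]),
    "suction_flow_m3h": [410.0, 305.0, 240.0],  # falling toward surge
})
vibration = pd.DataFrame({
    "time": pd.to_datetime(["2019-12-12 10:00:05", "2019-12-12 10:00:35",
                            "2019-12-12 10:01:02"]),
    "overall_vib_mm_s": [2.1, 3.4, 6.8],        # rising as flow falls
})

# Align the two datasets on the nearest prior timestamp – the manual
# collation and time-synchronization step, done automatically.
combined = pd.merge_asof(vibration.sort_values("time"),
                         process.sort_values("time"),
                         on="time", direction="backward")

# A naive precursor rule: low flow AND elevated vibration together.
alerts = combined[(combined.suction_flow_m3h < 300.0)
                  & (combined.overall_vib_mm_s > 5.0)]
print(len(alerts))  # → 1
```

Neither dataset flags a problem on its own here; only the time-aligned combination does, which is the point of the combined twin.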
Ultimately, an effective asset-process digital twin combines data from expert systems and provides automated delivery of key processed data. The latter point is most applicable to an asset health twin, which in its current graphical form requires a person to view and interpret key aspects of machine condition. The combined asset-process digital twin addresses this and provides the foundation for predictive analytics methods that far exceed analytics performed on historian data alone.
For more information, please visit: www.honeywellprocess.com