Over the past couple of decades, performance, reliability, and safety assessments performed using predictive capability for electromechanical systems have been on the rise. Such capability has a tremendous impact during the product development cycle, shortening development times and reducing downstream design changes. However, no standard exists in the truck and off-highway engineering design industry for representing the degree to which a model (i.e., a complex predictive or simulation model, such as one built with Simulink, FE, or CFD tools) has been validated. Such a standard would give engineers and managers a tool to assess the maturity of the predictive capability itself. This article presents three important factors that establish why such a standard is required, how it can benefit the industry in the future, what is available in other industries, and what is required to develop a standard applicable to this industry.
Why a standard is required
First, with the increase in adoption of predictive capability approaches in product design and development, there is a parallel increase, perhaps nonlinear, in the number of modeling and simulation (M&S) software packages that help a design firm achieve its predictive capability goals. Although it is good to have a wide range of software products on the market to choose from, whether purchasing a new package or transitioning from one to another, the choice involves a degree of risk from the buyer's standpoint. The risk lies in estimating which software is more reliable in terms of the fundamental elements that contribute to M&S: (i) physics modeling fidelity; (ii) code verification; (iii) solution verification; and (iv) model validation and uncertainty quantification.
Software manufacturers diligently work to address these factors during the development cycle. However, there is a "confidence building" phase in which the software seller works with the buyer to demonstrate the software's performance capability, which is time-consuming and costly and ties up resources for both parties.
Second, the increased reliance on the supplier’s M&S data to support design decisions during product development presents risk, specifically in an extended enterprise business format. Figure 1 illustrates how a buyer (typically an OEM) has to rely on M&S results to choose an advanced tire design for a set of design requirements.
As suppliers commit to their design based on their M&S results, buyers also commit and absorb the associated risk. The downside is that buyers do not know whether the model's validation domain matches their application domain. For instance, using an on-highway tire structural model to predict tire performance for an off-highway application is a potential source of risk (see Scenario III in Figure 2), because the modeling parameters (e.g., the road-to-tire contact model) may not be applicable to both domains (on-highway and off-highway). A mismatch between a model's validation domain and the buyer's application domain may manifest as a product failure after 24-36 months of product development effort. The ideal case is Scenario I (see Figure 2), but there is not yet a formalized way for a buyer to determine whether Scenario I or Scenario II is satisfied.
The third factor involves a model-reuse scenario (see Figures 3 and 4), in which an engineer relies on a previously built predictive model to evaluate engineering changes. For a multitude of reasons, the engineer reusing the model for a change evaluation is, in some instances, not the engineer who developed it. The model, however, remains in the design firm's database, and reusing it is the best course of action.
The challenge at this point is bearing the risk of the model's outcome with little knowledge of the extent of its validation. The reusing engineer has to spend time learning about the model and developing a self-assessment of how thoroughly it was validated. How long this self-assessment takes depends on the model's complexity and the engineer's experience. In a large-scale design project, such delays are unaffordable. At the same time, skipping the self-assessment and using an uncalibrated model on the assumption that it was previously validated could be detrimental in an engineering change scenario. Thus, the time invested in this activity is a necessity, not an option.
Thus, the question is how to evaluate the risk associated with a model in a design or software change/selection scenario and reduce the associated cost, time, and resource (CTR) constraints. Risk reduction is possible with awareness of the known unknowns. For instance, is the model correlation within 10%? Are the application and validation domains comparable? Questions such as these help, but a metric that represents the degree of model validation would address the problems described earlier more directly. Such a model validation metric (MVM) does not yet exist in the industry.
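To make the "within 10%" question concrete, the following minimal sketch (in Python) shows one simple way such a check could be expressed numerically. The function name, tolerance, and data are hypothetical illustrations, not an established industry metric.

```python
# Minimal sketch of a simple quantitative validation check: relative error
# between model predictions and physical test measurements against a tolerance.
# This is an illustrative stand-in, not the model validation metric (MVM)
# that the industry still lacks.

def within_tolerance(predicted, measured, tolerance=0.10):
    """Return True if every prediction is within `tolerance` (relative) of its measurement."""
    return all(abs(p - m) <= tolerance * abs(m) for p, m in zip(predicted, measured))

# Hypothetical predicted vs. measured responses (arbitrary units)
predicted = [10.2, 15.1, 19.8]
measured = [10.0, 14.0, 20.5]
print(within_tolerance(predicted, measured))  # True: all points agree within 10%
```

Even a pass on such a check says nothing about whether the validation domain matches the application domain, which is why a richer, standardized metric is needed.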
Roadmap for model assessment
In order to reduce CTR constraints, there is a need to develop a set of guidelines to assess models. NASA, the U.S. Department of Defense, and Sandia National Labs have attempted to develop different model assessment schemes, and the recent advancement in this effort is the predictive capability maturity model (PCMM). It evaluates key elements in a prediction or simulation model: (i) Representation and geometric fidelity; (ii) physics and material model fidelity; (iii) code verification; (iv) solution verification; (v) model validation; and (vi) uncertainty quantification and sensitivity analysis.
Using a four-point ordinal scale, PCMM assesses a model on each of these elements to produce an overall assessment. The result is a numerical set in which each value corresponds to the assessment level of one of the six elements. It provides a subjective, qualitative assessment that helps users evaluate the risk involved (see Figure 5). The color scheme indicates how close a model's assessed maturity level is to the required level: the larger the gap between the required and assessed levels, the higher the risk associated with the model.
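As a rough illustration of how a PCMM-style gap assessment could be recorded, the sketch below (in Python) tabulates assumed required and assessed levels for the six elements on a 0-3 ordinal scale and flags the gaps. The element keys, example levels, and risk thresholds are assumptions for illustration only, not values taken from PCMM or from Figure 5.

```python
# Illustrative sketch of a PCMM-style assessment: six elements scored on a
# four-point ordinal scale (0 = lowest, 3 = highest maturity), compared
# against required levels. Levels and risk thresholds are assumed.

PCMM_ELEMENTS = [
    "representation_and_geometric_fidelity",
    "physics_and_material_model_fidelity",
    "code_verification",
    "solution_verification",
    "model_validation",
    "uncertainty_quantification_and_sensitivity",
]

def maturity_gaps(required, assessed):
    """Return the maturity gap (required minus assessed) for each element.

    A positive gap means the model falls short of the required level;
    larger gaps imply higher risk in using the model for the decision at hand.
    """
    return {e: required[e] - assessed[e] for e in PCMM_ELEMENTS}

# Hypothetical example: the project requires level 2 across the board
required = dict.fromkeys(PCMM_ELEMENTS, 2)
assessed = {
    "representation_and_geometric_fidelity": 2,
    "physics_and_material_model_fidelity": 1,
    "code_verification": 2,
    "solution_verification": 1,
    "model_validation": 0,
    "uncertainty_quantification_and_sensitivity": 1,
}

for element, gap in maturity_gaps(required, assessed).items():
    flag = "OK" if gap <= 0 else ("moderate risk" if gap == 1 else "high risk")
    print(f"{element}: gap={gap} -> {flag}")
```

The value of such a tabulation is less in the arithmetic than in making the shortfall per element explicit, which mirrors the color-coded gap view described for Figure 5.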
Is it adequate to adapt the PCMM approach to the off-highway and truck industry? No. It may be a good starting point, but its applicability in this industry has not yet been explored enough to draw a generalization. Thus, a collaborative effort is required to develop a standard for model assessment, along with a process for self-certifying models.
Taking it one step further, an independent model certification agency (MCA) would be ideal for eliminating concerns about subjectivity in a self-certified assessment metric. Yes, an MCA is a futuristic vision, but discussions about the potential benefits of a model validation metric for the industry need to be initiated in various forums and conferences.
Dr. Prabhu Shankar, Ph.D., Sr. Principal Engineer - Powertrain, JLG Industries Inc., An Oshkosh Corporation Company, wrote this article for Truck & Off-Highway Engineering.