Search Results

Journal Article

Enhancing Decision Topology Assessment in Engineering Design

2014-04-01
2014-01-0719
Implications of decision analysis (DA) for engineering design are important and well-documented. However, widespread adoption has not occurred. To that end, the authors recently proposed decision topologies (DT) as a visual method for representing decision situations and proved that they are entirely consistent with normative decision analysis. This paper addresses the practical issue of assessing the DTs of a designer using their responses. As in classical DA, this step is critical to encoding the decision maker's (DM's) preferences so that further analysis and mathematical optimization can be performed on the correct set of preferences. We show how multi-attribute DTs can be directly assessed from DM responses. Furthermore, we show that preferences under uncertainty can be trivially incorporated and that multi-attribute topologies can be constructed from single-attribute topologies, similar to multi-linear functions in utility analysis. This incremental construction simplifies the process of topology construction.
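
The construction referenced above parallels the multi-linear utility form from classical utility analysis. As a minimal illustration of that form only (not of the paper's topology-assembly procedure), the Python sketch below uses hypothetical single-attribute utilities u1, u2 and scaling constants k1, k2, k12:

    # Illustrative only: two-attribute multi-linear utility (Keeney-Raiffa form),
    # the construction the abstract compares incremental topology assembly against.
    # k1, k2, k12 and the single-attribute utilities below are hypothetical.

    def u1(x):          # single-attribute utility for attribute 1 (assumed shape)
        return x ** 0.5

    def u2(y):          # single-attribute utility for attribute 2 (assumed shape)
        return 1.0 - (1.0 - y) ** 2

    def multilinear_utility(x, y, k1=0.4, k2=0.4, k12=0.2):
        # U(x, y) = k1*u1 + k2*u2 + k12*u1*u2, with attributes scaled to [0, 1]
        return k1 * u1(x) + k2 * u2(y) + k12 * u1(x) * u2(y)

    print(multilinear_utility(0.81, 0.5))
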
Journal Article

A Simulation and Optimization Methodology for Reliability of Vehicle Fleets

2011-04-12
2011-01-0725
Understanding reliability is critical in the design, maintenance, and durability analysis of engineering systems. This paper presents a reliability simulation methodology for vehicle fleets that uses limited data. The method can be used to estimate the reliability of non-repairable as well as repairable systems. It can optimally allocate individual component reliabilities, based on a target system reliability, using a multi-objective optimization algorithm. The algorithm establishes a Pareto front that can be used for an optimal tradeoff between reliability and the associated cost. The method uses Monte Carlo simulation to estimate the system failure rate and reliability as a function of time. The probability density functions (PDFs) of the time between failures for all components of the system are estimated using either limited data or a user-supplied MTBF (mean time between failures) and its coefficient of variation.
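
A minimal Monte Carlo sketch of the estimation step described above, assuming lognormal times between failures parameterized by a user-supplied MTBF and coefficient of variation and a simple series system; the component values are placeholders, not fleet data from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def lognormal_from_mtbf(mtbf, cov, size):
        # Lognormal time-between-failures with the given mean (MTBF) and
        # coefficient of variation (assumed distribution; the paper also
        # supports fitting PDFs from limited failure data).
        sigma2 = np.log(1.0 + cov ** 2)
        mu = np.log(mtbf) - 0.5 * sigma2
        return rng.lognormal(mu, np.sqrt(sigma2), size)

    def series_system_reliability(components, t, n_sims=100_000):
        # R(t) = P(no component fails before t) for a series system.
        first_failures = np.column_stack([
            lognormal_from_mtbf(mtbf, cov, n_sims) for mtbf, cov in components
        ])
        return np.mean(first_failures.min(axis=1) > t)

    # Hypothetical fleet components: (MTBF in hours, coefficient of variation)
    components = [(1200.0, 0.6), (800.0, 0.4), (2000.0, 0.8)]
    print(series_system_reliability(components, t=200.0))
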
Journal Article

System Topology Identification with Limited Test Data

2012-04-16
2012-01-0064
In this article we present an approach that uses simulation to identify the system topology for reliability calculations. The system topology describes how all components in a system are functionally connected. Most reliability engineering literature assumes either that the system topology is known, so that all failure modes can be deduced, or that, if the topology is not known, we are only interested in identifying the dominant failure modes. The authors contend that we should try to extract as much information about the system topology as possible from the failure or success information of a system. This will not only identify the dominant failure modes but will also provide an understanding of how the components are functionally connected, allowing for more complicated analyses, if needed. We use an evolutionary approach in which system topologies are generated at random and then tested against failure or success data. The topologies evolve based on how consistent they are with the test data.
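
A toy sketch of such an evolutionary loop, under the assumption that a candidate topology is encoded as a set of minimal cut sets and scored against pass/fail test records; the encoding, mutation rule, and records are illustrative, not the authors' implementation:

    import random

    random.seed(1)
    N_COMP = 4

    # Test records: (set of failed components, observed system failure?)
    # These records are made up for illustration.
    records = [({0}, False), ({0, 1}, True), ({2}, False),
               ({2, 3}, True), ({1, 3}, False), ({0, 2}, False)]

    def predicts_failure(cut_sets, failed):
        # System fails iff every component of some cut set has failed.
        return any(cs <= failed for cs in cut_sets)

    def fitness(cut_sets):
        # Fraction of test records the candidate topology explains.
        return sum(predicts_failure(cut_sets, f) == y for f, y in records) / len(records)

    def mutate(cut_sets):
        new = [set(cs) for cs in cut_sets]
        cs = random.choice(new)
        comp = random.randrange(N_COMP)
        cs.symmetric_difference_update({comp})   # toggle one component in one cut set
        return [cs for cs in new if cs]          # drop empty cut sets

    # Random initial candidate, then simple hill-climbing "evolution".
    best = [{random.randrange(N_COMP)} for _ in range(2)]
    for _ in range(500):
        cand = mutate(best)
        if cand and fitness(cand) >= fitness(best):
            best = cand
    print(best, fitness(best))
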
Journal Article

A Nonparametric Bootstrap Approach to Variable-size Local-domain Design Optimization and Computer Model Validation

2012-04-16
2012-01-0226
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validating computer models over the entire design space can be costly, a recent approach was proposed in which design optimization and model validation are performed concurrently using a sequential approach with both fixed and variable-size local domains. The variable-size approach used parametric distributions, such as the Gaussian, to quantify the variability in test data and model predictions, and a maximum likelihood estimation to calibrate the prediction model. A parametric bootstrap method was also used to size each local domain. In this article, we generalize the variable-size approach by not assuming any particular distribution, such as the Gaussian. A nonparametric bootstrap methodology is used instead to size the local domains. We expect this generality to be useful in applications where distributional assumptions are difficult to verify or are not met at all.
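
A minimal sketch of the nonparametric bootstrap idea, assuming the local-domain size is tied to an upper confidence bound on model error at nearby test points; the residuals and the radius rule below are placeholders, not the paper's procedure:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical residuals |test - model prediction| at test points near the
    # current design (placeholder values).
    residuals = np.array([0.12, 0.05, 0.21, 0.09, 0.15, 0.07, 0.18])

    def bootstrap_upper_bound(data, stat=np.mean, n_boot=5000, alpha=0.05):
        # Resample with replacement; no distributional assumption on the data.
        boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                         for _ in range(n_boot)])
        return np.quantile(boot, 1.0 - alpha)

    err_bound = bootstrap_upper_bound(residuals)
    # Shrink the local domain as the error bound grows (illustrative rule only).
    local_radius = 1.0 / (1.0 + 10.0 * err_bound)
    print(err_bound, local_radius)
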
Journal Article

Multi-Objective Decision Making under Uncertainty and Incomplete Knowledge of Designer Preferences

2011-04-12
2011-01-1080
Multi-attribute decision making and multi-objective optimization complement each other. Often, while making design decisions involving multiple attributes, a Pareto front is generated using a multi-objective optimizer, and the end user then chooses the optimal design from the Pareto front based on his or her preferences. This seemingly simple methodology requires substantial modification if uncertainty is present. We explore two kinds of uncertainty in this paper: uncertainty in the decision variables, which we call inherent design problem (IDP) uncertainty, and uncertainty in our knowledge of the decision maker's preferences, which we refer to as preference assessment (PA) uncertainty. From a utility-theory perspective, a rational decision maker maximizes his or her expected multi-attribute utility.
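
A small sketch of choosing from a Pareto front under both kinds of uncertainty, assuming an additive utility, Gaussian noise on the realized attributes (IDP), and uncertain preference weights (PA); all values and the utility form are illustrative:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical Pareto designs: nominal (cost, mass) pairs, both to be minimized.
    pareto = np.array([[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [7.0, 2.5], [9.0, 2.0]])

    def utility(attrs, w):
        # Simple additive utility on scaled attributes (illustrative form).
        scaled = 1.0 - attrs / attrs.max(axis=0)   # higher is better
        return scaled @ w

    n = 2000
    # IDP uncertainty: noise on realized attributes; PA uncertainty: uncertain weights.
    attr_noise = rng.normal(0.0, 0.3, size=(n, *pareto.shape))
    w1 = rng.uniform(0.3, 0.7, size=n)                 # uncertain weight on cost
    weights = np.stack([w1, 1.0 - w1], axis=1)

    expected_u = np.zeros(len(pareto))
    for k in range(n):
        expected_u += utility(pareto + attr_noise[k], weights[k])
    expected_u /= n

    print("preferred design:", pareto[np.argmax(expected_u)])
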
Journal Article

Managing the Computational Cost of Monte Carlo Simulation with Importance Sampling by Considering the Value of Information

2013-04-08
2013-01-0943
Importance Sampling is a popular method for reliability assessment. Although it is significantly more efficient than standard Monte Carlo simulation when a suitable sampling distribution is used, in many design problems it is still too expensive. The authors have previously proposed a method for managing the computational cost of standard Monte Carlo simulation that views design as a choice among alternatives with uncertain reliabilities. Information from simulation has value only if it helps the designer make a better choice among the alternatives. This paper extends that method to Importance Sampling. First, the designer estimates the prior probability density functions of the reliabilities of the alternative designs and calculates the expected utility of the choice of the best design. Subsequently, the designer estimates the likelihood function of the probability of failure by performing an initial simulation with Importance Sampling.
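
A minimal importance-sampling sketch for estimating a small failure probability, with a standard normal input, a hypothetical limit state g(x) = 3 - x, and a sampling density shifted toward the failure region; the value-of-information layer described in the paper is not shown:

    import numpy as np

    rng = np.random.default_rng(3)

    def g(x):
        # Hypothetical limit state: failure when g(x) < 0, i.e. x > 3.
        return 3.0 - x

    def normal_pdf(x, mu=0.0, sigma=1.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    n = 20_000
    mu_is = 3.0                        # sampling density centered near the failure region
    x = rng.normal(mu_is, 1.0, n)
    weights = normal_pdf(x) / normal_pdf(x, mu=mu_is)   # f(x) / q(x)
    indicator = (g(x) < 0.0).astype(float)

    p_is = np.mean(indicator * weights)
    se_is = np.std(indicator * weights, ddof=1) / np.sqrt(n)
    print(p_is, se_is)   # compare with the exact value 1 - Phi(3) ~ 1.35e-3
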
Technical Paper

Managing the Computational Cost in a Monte Carlo Simulation by Considering the Value of Information

2012-04-16
2012-01-0915
Monte Carlo simulation is a popular tool for reliability assessment because of its robustness and ease of implementation. A major concern with this method is its computational cost; standard Monte Carlo simulation requires quadrupling the number of replications to halve the standard deviation of the estimated failure probability. Efforts to increase efficiency focus on intelligent sampling procedures and on methods for efficiently calculating the performance function of a system. This paper proposes a new method to manage cost that views design as a decision among alternatives with uncertain reliabilities. Information from a simulation has value only if it enables the designer to make a better choice among the alternative options. Consequently, the value of the information from the simulation is equal to the gain from using this information to improve the decision. Using this method, a designer can determine the number of replications that are worth performing.
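
A short worked example of the quadrupling claim: the standard error of the estimated failure probability scales as sqrt(p(1-p)/N), so multiplying the number of replications N by four halves it (values are illustrative):

    import math

    # Standard error of a Monte Carlo failure-probability estimate: sqrt(p*(1-p)/N).
    # Quadrupling N halves the standard error.
    p = 1e-3
    for n in (10_000, 40_000, 160_000):
        print(n, math.sqrt(p * (1 - p) / n))
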
Technical Paper

System Failure Identification using Linear Algebra: Application to Cost-Reliability Tradeoffs under Uncertain Preferences

2012-04-16
2012-01-0914
Reaching a system-level reliability target is an inverse problem: component-level reliabilities are determined for a required system-level reliability. Because this inverse problem does not have a unique solution, one approach is to trade off system reliability against cost and to allow the designer to select a design with a target system reliability using his or her preferences. In this case, the component reliabilities are readily available from the calculation of the reliability-cost tradeoff. In arriving at the set of solutions to be traded off, one encounters two problems. First, the system reliability calculation is based on repeated system simulations in which each system state, indicating which components work and which have failed, is tested to determine whether it causes system failure. Second, the task of eliciting and encoding the decision maker's preferences is extremely difficult because of uncertainty in modeling those preferences.
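
An illustrative sketch of the repeated system-state testing described above, using a made-up structure function (component 0 in series with a 2-out-of-3 group) and a linear cost model; the paper's linear-algebra formulation and preference encoding are not reproduced:

    import numpy as np

    rng = np.random.default_rng(11)

    def system_works(states):
        # Hypothetical structure function: component 0 in series with a
        # 2-out-of-3 group of components 1-3. states: (n, 4) array of 0/1.
        return (states[:, 0] == 1) & (states[:, 1:].sum(axis=1) >= 2)

    def evaluate(component_rel, cost_per_unit, n=100_000):
        states = (rng.random((n, len(component_rel))) < component_rel).astype(int)
        system_rel = system_works(states).mean()
        cost = float(np.dot(component_rel, cost_per_unit))   # illustrative linear cost
        return system_rel, cost

    # One candidate allocation of component reliabilities and its cost.
    print(evaluate(np.array([0.99, 0.90, 0.92, 0.88]),
                   np.array([5.0, 2.0, 2.5, 1.8])))
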
Technical Paper

A Cost-Driven Method for Design Optimization Using Validated Local Domains

2013-04-08
2013-01-1385
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validating computer models over the entire design space can be costly, we have previously proposed an approach in which design optimization and model validation are performed concurrently using a sequential approach with variable-size local domains. We used test data and statistical bootstrap methods to size each local domain, where the prediction model is considered validated and where design optimization is performed. The method proceeds iteratively until the optimum design is obtained. This method, however, requires test data to be available in each local domain along the optimization path. In this paper, we refine our methodology by using polynomial regression to predict the size and shape of a local domain at some steps along the optimization process without using test data.
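
A minimal sketch of the refinement described above, assuming the radii of previously validated local domains are fit with a low-order polynomial and extrapolated one step ahead where no test data are available; the numbers are placeholders:

    import numpy as np

    # Radii of validated local domains at previous optimization steps (placeholders).
    steps = np.array([1, 2, 3, 4, 5], dtype=float)
    radii = np.array([0.80, 0.65, 0.55, 0.48, 0.44])

    coeffs = np.polyfit(steps, radii, deg=2)      # low-order polynomial fit
    predicted_radius = np.polyval(coeffs, 6.0)    # predicted size at the next step,
                                                  # used where no test data exist
    print(predicted_radius)
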
Journal Article

Decision-Making for Autonomous Mobility Using Remotely Sensed Terrain Parameters in Off-Road Environments

2021-04-06
2021-01-0233
Off-road vehicle operation requires constant decision-making under great uncertainty. Such decisions are multi-faceted and range from acquisition decisions to operational decisions. A major input to these decisions is terrain information in the form of soil properties. This information needs to be propagated to path planning algorithms that augment it with other inputs, such as visual terrain assessment and other sensors. In this sequence of steps, many resources are needed, and it is often not clear how best to utilize them. We present an integrated approach in which a mission's overall performance is measured using a multiattribute utility function. This framework allows us to evaluate the value of acquiring terrain information and then of using it in path planning. The computational effort of optimizing the vehicle path is also considered and optimized. We demonstrate our approach using data acquired from the Keweenaw Research Center terrains and present some results.
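
A toy expected-value-of-perfect-information calculation in the spirit of this framework, with a made-up prior over soil conditions and made-up mission utilities for two candidate paths; it is not the paper's utility model:

    import numpy as np

    # Made-up prior over soil conditions and mission utilities for two paths.
    prior = np.array([0.6, 0.4])                  # P(firm), P(soft)
    utility = np.array([[0.9, 0.2],               # path A under firm / soft soil
                        [0.6, 0.7]])              # path B under firm / soft soil

    eu_without_info = (utility @ prior).max()     # choose a path, then learn the soil
    eu_with_info = utility.max(axis=0) @ prior    # learn the soil, then choose a path
    value_of_terrain_info = eu_with_info - eu_without_info
    print(value_of_terrain_info)
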
Journal Article

Quantum Explanations for Interference Effects in Engineering Decision Making

2022-03-29
2022-01-0215
Engineering practice routinely involves decision making under uncertainty. Much of this decision making entails reconciling multiple pieces of information to form a suitable model of uncertainty. As more information is collected, one expects to make better and better decisions. However, conditional probability assessments made by human decision makers as new information arrives do not always follow expected trends and instead exhibit inconsistencies. Understanding these inconsistencies is necessary for better modeling of the cognitive processes taking place in the decision maker's mind, whether that person is the designer or the end-user. Doing so can result in better products and product features. Quantum probability has been used in the literature to explain many commonly observed deviations from classical probability, such as the question order effect, the response replicability effect, the Machina and Ellsberg paradoxes, and the effect of positive and negative interference between events.
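
A short numerical illustration of the interference term that quantum probability adds to the classical law of total probability; the probabilities and phase below are arbitrary examples, not elicited data:

    import math

    # Classical law of total probability vs. the quantum version with an
    # interference term (values below are arbitrary examples).
    p_a, p_b_given_a, p_b_given_not_a = 0.5, 0.6, 0.3
    theta = 2.0                      # phase angle between the two "paths"

    classical = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    interference = 2 * math.sqrt(p_a * p_b_given_a * (1 - p_a) * p_b_given_not_a) * math.cos(theta)
    print(classical, classical + interference)
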
Technical Paper

Decision-Based Universal Design - Using Copulas to Model Disability

2015-04-14
2015-01-0418
This paper develops a design paradigm for universal products. Universal design is a term used for designing products and systems that are equally accessible to and usable by people with and without disabilities. Two common challenges for research in this area are that (1) there is a continuum of disabilities, making it hard to optimize product features, and (2) there is no effective benchmark for evaluating such products. To exacerbate these issues, data regarding customer disabilities and their preferences are hard to come by. We propose a copula-based approach for modeling the market coverage of a portfolio of universal products. The multiattribute preference of customers to purchase a product is modeled using Frank's Archimedean copula. Inputs from various disparate sources can be collected and incorporated into a decision system.
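
A minimal sketch of Frank's Archimedean copula coupling two marginal acceptance probabilities; the marginals and dependence parameter are placeholders, and the paper's full market-coverage model is not reproduced:

    import math

    def frank_copula(u, v, theta):
        # Frank's Archimedean copula C(u, v); theta controls dependence
        # (as theta -> 0, C(u, v) -> u * v, i.e. independence).
        num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
        den = math.exp(-theta) - 1.0
        return -math.log(1.0 + num / den) / theta

    # Placeholder marginals: probability a customer accepts the product on two
    # criteria (e.g., reachability and ease of grip), coupled with theta = 4.
    print(frank_copula(0.7, 0.55, theta=4.0))
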
Technical Paper

Topological Data Analysis for Navigation in Unstructured Environments

2023-04-11
2023-01-0088
Autonomous vehicle navigation, both global and local, makes use of large amounts of multifactorial data from onboard sensors, prior information, and simulations to safely navigate a chosen terrain. Additionally, because each mission has a unique set of requirements, operational environment, and vehicle capabilities, any fixed formulation of the cost associated with these attributes is sub-optimal across different missions. Much work has been done in the literature on finding the optimal cost definition and the subsequent mission pathing, given sufficient measurements of the preferences over the mission factors. However, obtaining these measurements can be an arduous and computationally expensive task. Furthermore, the algorithms that utilize this large amount of multifactorial data are themselves time-consuming and expensive.
Technical Paper

High Dimensional Preference Learning: Topological Data Analysis Informed Sampling for Engineering Decision Making

2024-04-09
2024-01-2422
Engineering design decisions often involve many attributes that can differ in their levels of importance to the decision maker (DM) while also exhibiting complex statistical relationships. Learning a decision-making policy that accurately represents the DM's actions has long been a goal of decision analysts. To circumvent elicitation and modeling issues, this process is often oversimplified in how many factors are considered and in how complex the relationships between them are allowed to be. Without these simplifications, classical lottery-based preference elicitation is overly expensive, and the quality of responses degrades rapidly as the number of attributes increases. In this paper, we investigate the ability of deep preference machine learning to model high-dimensional decision-making policies using rankings elicited from decision makers.
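
A heavily simplified sketch of learning a scoring function from ranked pairs via a pairwise logistic (Bradley-Terry style) loss; a linear scorer and synthetic data stand in for the deep preference machine and elicited DM rankings described above:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic alternatives with 8 attributes and a hidden "true" utility used
    # only to generate ranked pairs (stand-in for elicited DM rankings).
    n_alt, n_attr = 200, 8
    X = rng.random((n_alt, n_attr))
    true_w = rng.normal(size=n_attr)
    ranks = X @ true_w

    # Pairwise training data: (preferred, not preferred) index pairs.
    pairs = [(i, j) if ranks[i] > ranks[j] else (j, i)
             for i, j in rng.integers(0, n_alt, size=(1000, 2)) if i != j]

    w = np.zeros(n_attr)          # linear scorer standing in for the paper's deep model
    lr = 0.1
    for _ in range(200):
        grad = np.zeros(n_attr)
        for i, j in pairs:
            d = X[i] - X[j]
            grad += -d / (1.0 + np.exp(w @ d))   # gradient of -log sigmoid(w . d)
        w -= lr * grad / len(pairs)

    # Agreement of learned scores with held-out pairwise preferences.
    test = rng.integers(0, n_alt, size=(500, 2))
    agree = np.mean([(X[i] @ w > X[j] @ w) == (ranks[i] > ranks[j])
                     for i, j in test if i != j])
    print(agree)
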