Search Results

Journal Article

Managing the Computational Cost of Monte Carlo Simulation with Importance Sampling by Considering the Value of Information

2013-04-08
2013-01-0943
Importance Sampling is a popular method for reliability assessment. Although it is significantly more efficient than standard Monte Carlo simulation if a suitable sampling distribution is used, in many design problems it is too expensive. The authors have previously proposed a method to manage the computational cost in standard Monte Carlo simulation that views design as a choice among alternatives with uncertain reliabilities. Information from simulation has value only if it helps the designer make a better choice among the alternatives. This paper extends their method to Importance Sampling. First, the designer estimates the prior probability density functions of the reliabilities of the alternative designs and calculates the expected utility of the choice of the best design. Subsequently, the designer estimates the likelihood function of the probability of failure by performing an initial simulation with Importance Sampling.
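
As a rough illustration of the importance-sampling estimator discussed above, the sketch below estimates a failure probability for a toy limit-state function g(x) using a shifted Gaussian proposal; the limit-state function, the shift, and the sample size are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        # Illustrative limit-state function: failure when g(x) < 0.
        return 3.0 - x

    def importance_sampling_pf(n=10_000, shift=3.0):
        # Nominal density: standard normal. Proposal: normal shifted toward the failure region.
        x = rng.normal(loc=shift, scale=1.0, size=n)
        # Likelihood ratio w = nominal pdf / proposal pdf, computed in log space.
        w = np.exp(-0.5 * x**2 + 0.5 * (x - shift)**2)
        failed = (g(x) < 0.0).astype(float)
        pf_hat = np.mean(failed * w)
        se = np.std(failed * w, ddof=1) / np.sqrt(n)  # standard error of the estimate
        return pf_hat, se

    pf, se = importance_sampling_pf()
    print(f"estimated probability of failure: {pf:.2e} +/- {se:.1e}")

An estimate of this kind, together with its uncertainty, is the sort of quantity that would feed the value-of-information comparison across the alternative designs described in the abstract.
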
Technical Paper

A Cost-Driven Method for Design Optimization Using Validated Local Domains

2013-04-08
2013-01-1385
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validating computer models over the entire design space can be costly, we have previously proposed an approach in which design optimization and model validation are performed concurrently using a sequential approach with variable-size local domains. We used test data and statistical bootstrap methods to size each local domain in which the prediction model is considered validated and design optimization is performed. The method proceeds iteratively until the optimum design is obtained. This method, however, requires test data to be available in each local domain along the optimization path. In this paper, we refine our methodology by using polynomial regression to predict the size and shape of a local domain at some steps along the optimization process without using test data.
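
A minimal sketch of the bootstrap idea behind sizing a validated local domain, assuming a candidate radius is accepted when a bootstrapped upper bound on the model-versus-test error inside that domain stays below a tolerance; the radii, tolerance, and error metric are placeholders, not the paper's procedure.

    import numpy as np

    rng = np.random.default_rng(1)

    def bootstrap_error_bound(errors, n_boot=2000, level=0.95):
        # Bootstrap the mean absolute prediction error at the available test points
        # and return the upper confidence bound used to accept or shrink the domain.
        means = [np.mean(rng.choice(errors, size=len(errors), replace=True))
                 for _ in range(n_boot)]
        return np.quantile(means, level)

    def local_domain_radius(x_center, x_test, errors, tol=0.05, radii=(2.0, 1.0, 0.5, 0.25)):
        # Try progressively smaller radii until the bootstrapped error bound
        # of the test points inside the candidate domain meets the tolerance.
        for r in radii:
            inside = np.linalg.norm(x_test - x_center, axis=1) <= r
            if inside.sum() < 3:
                continue  # not enough test data to validate this radius
            if bootstrap_error_bound(errors[inside]) <= tol:
                return r
        return None  # model not considered validated near x_center

    # Illustrative data: test points around a design and model-vs-test errors.
    x_c = np.zeros(2)
    x_t = rng.uniform(-2, 2, size=(40, 2))
    err = np.abs(rng.normal(0.0, 0.03, size=40))
    print("validated local domain radius:", local_domain_radius(x_c, x_t, err))
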
Journal Article

Decision-Making for Autonomous Mobility Using Remotely Sensed Terrain Parameters in Off-Road Environments

2021-04-06
2021-01-0233
Off-road vehicle operation requires constant decision-making under great uncertainty. Such decisions are multi-faceted and range from acquisition decisions to operational decisions. A major input to these decisions is terrain information in the form of soil properties. This information needs to be propagated to path planning algorithms that combine it with other inputs such as visual terrain assessment and other sensor data. In this sequence of steps, many resources are needed, and it is not always clear how best to utilize them. We present an integrated approach in which a mission’s overall performance is measured using a multiattribute utility function. This framework allows us to evaluate the value of acquiring terrain information and then its use in path planning. The computational effort of optimizing the vehicle path is also accounted for. We demonstrate our approach using data acquired from the Keweenaw Research Center terrains and present representative results.
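
A minimal sketch of scoring candidate mission plans with a multiattribute utility function, assuming an additive form over illustrative attributes (mission time, immobilization risk, energy use) with made-up weights; the paper's elicited utility function and attribute set are not given in this excerpt.

    import numpy as np

    # Illustrative single-attribute utilities mapped to [0, 1]; the additive form
    # and weights below are assumptions, not the paper's elicited preferences.
    def u_time(t_hours):        return np.exp(-t_hours / 4.0)        # shorter missions preferred
    def u_risk(p_immobilize):   return 1.0 - p_immobilize            # lower immobilization risk preferred
    def u_energy(kwh):          return max(0.0, 1.0 - kwh / 100.0)   # lower energy use preferred

    WEIGHTS = {"time": 0.3, "risk": 0.5, "energy": 0.2}

    def mission_utility(t_hours, p_immobilize, kwh):
        # Additive multiattribute utility of one candidate path or plan.
        return (WEIGHTS["time"] * u_time(t_hours)
                + WEIGHTS["risk"] * u_risk(p_immobilize)
                + WEIGHTS["energy"] * u_energy(kwh))

    # Compare two candidate paths: a fast but risky route vs. a slow, safe one.
    print("fast/risky :", round(mission_utility(2.0, 0.20, 60.0), 3))
    print("slow/safe  :", round(mission_utility(5.0, 0.02, 40.0), 3))
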
Journal Article

Quantum Explanations for Interference Effects in Engineering Decision Making

2022-03-29
2022-01-0215
Engineering practice routinely involves decision making under uncertainty. Much of this decision making entails reconciling multiple pieces of information to form a suitable model of uncertainty. As more information is collected, one would expect to make better and better decisions. However, conditional probability assessments made by human decision makers as new information arrives do not always follow expected trends and instead exhibit inconsistencies. Understanding these inconsistencies is necessary for better modeling of the cognitive processes taking place in the decision maker's mind, whether that is the designer or the end user. Doing so can result in better products and product features. Quantum probability has been used in the literature to explain many commonly observed deviations from classical probability, such as the question order effect, the response replicability effect, the Machina and Ellsberg paradoxes, and the effects of positive and negative interference between events.
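
The interference the abstract refers to is usually written as a quantum-probability version of the law of total probability with an extra cosine term (generic notation, not the paper's):

    P(B) = P(A)\,P(B \mid A) + P(\bar{A})\,P(B \mid \bar{A})
           + 2\sqrt{P(A)\,P(B \mid A)\,P(\bar{A})\,P(B \mid \bar{A})}\,\cos\theta

Here \theta is the phase between the two paths to B; \cos\theta = 0 recovers the classical law of total probability, while positive or negative values of the last term produce the constructive or destructive interference effects mentioned above.
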
Journal Article

Balancing Lifecycle Sustainment Cost with Value of Information during Design Phase

2020-04-14
2020-01-0176
The complete lifecycle of complex systems, such as ground vehicles, consists of multiple phases including design, manufacturing, operation and sustainment (O&S), and finally disposal. For many systems, the majority of lifecycle costs are incurred during the O&S phase, specifically in the form of uncertain maintenance costs. Testing and analysis during the design phase, including reliability and supportability analysis, can have a major influence on costs during the O&S phase. However, the cost of the analysis itself must be reconciled with the expected benefits of the reduction in uncertainty. In this paper, we quantify the value of performing tests and analyses in the design phase by treating them as imperfect information obtained to better estimate uncertain maintenance costs.
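
A minimal Monte Carlo sketch of valuing an imperfect design-phase test, assuming a Gaussian prior on a design's uncertain O&S cost, a Gaussian test error, and a simple keep-or-switch decision against an alternative of known cost; all numbers and the decision setup are illustrative, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(2)

    # Prior belief about the design's uncertain O&S (maintenance) cost, in $M,
    # and an alternative with a known, fixed O&S cost. All values are illustrative.
    PRIOR_MEAN, PRIOR_SD = 10.0, 3.0
    ALTERNATIVE_COST = 11.0
    TEST_NOISE_SD = 1.0  # imperfect reliability/supportability test

    def expected_value_of_imperfect_information(n=200_000):
        true_cost = rng.normal(PRIOR_MEAN, PRIOR_SD, size=n)

        # Decision without testing: choose on the prior expected cost. Here the
        # prior mean beats the alternative, so we keep the uncertain design.
        cost_without_test = true_cost if PRIOR_MEAN <= ALTERNATIVE_COST else np.full(n, ALTERNATIVE_COST)

        # Decision with an imperfect test: observe a noisy cost, update by
        # conjugate Gaussian Bayes rule, then choose on the posterior mean.
        obs = true_cost + rng.normal(0.0, TEST_NOISE_SD, size=n)
        post_var = 1.0 / (1.0 / PRIOR_SD**2 + 1.0 / TEST_NOISE_SD**2)
        post_mean = post_var * (PRIOR_MEAN / PRIOR_SD**2 + obs / TEST_NOISE_SD**2)
        cost_with_test = np.where(post_mean <= ALTERNATIVE_COST, true_cost, ALTERNATIVE_COST)

        # Value of the test = expected cost reduction it enables.
        return cost_without_test.mean() - cost_with_test.mean()

    print(f"expected value of the imperfect test: ${expected_value_of_imperfect_information():.2f}M")
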
Technical Paper

Topological Data Analysis for Navigation in Unstructured Environments

2023-04-11
2023-01-0088
Autonomous vehicle navigation, both global and local, makes use of large amounts of multifactorial data from onboard sensors, prior information, and simulations to safely navigate a chosen terrain. Additionally, because each mission has a unique set of requirements, operational environment, and vehicle capabilities, any fixed formulation of the cost associated with these attributes is sub-optimal across different missions. Much work has been done in the literature on finding the optimal cost definition and the resulting mission path given sufficient measurements of preference over the mission factors. However, obtaining these measurements can be an arduous and computationally expensive task. Furthermore, the algorithms that utilize this large amount of multifactorial data are themselves time-consuming and expensive.
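
A minimal sketch of the kind of fixed cost formulation the abstract argues is sub-optimal across missions: several terrain attributes are combined with hand-picked weights into a per-cell traversal cost and searched with Dijkstra's algorithm; the attributes, weights, and grid are placeholders, not the paper's data.

    import heapq
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative per-cell terrain attributes on a small grid, each in [0, 1].
    N = 20
    slope, rough, soil_weakness = (rng.random((N, N)) for _ in range(3))

    # One fixed linear cost formulation; treat the weights as a single example
    # of the many possible mission-dependent formulations.
    W = {"slope": 1.0, "rough": 0.5, "soil": 2.0}
    cell_cost = 1.0 + W["slope"] * slope + W["rough"] * rough + W["soil"] * soil_weakness

    def dijkstra(cost, start, goal):
        # Lowest-cost path over 4-connected grid cells.
        dist = {start: 0.0}
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                return d
            if d > dist.get((r, c), np.inf):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < N and 0 <= nc < N:
                    nd = d + cost[nr, nc]
                    if nd < dist.get((nr, nc), np.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
        return np.inf

    print("path cost:", round(dijkstra(cell_cost, (0, 0), (N - 1, N - 1)), 2))
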
Technical Paper

Fault Diagnosis and Prediction in Automotive Systems with Real-Time Data Using Machine Learning

2022-03-29
2022-01-0217
In the automotive industry, a Malfunction Indicator Light (MIL) is commonly employed to signify a failure or error in a vehicle system. To identify the root cause that has triggered a particular fault, a technician or engineer will typically run diagnostic tests and analyses. This type of analysis can take a significant amount of time and resources at the cost of customer satisfaction and perceived quality. Predicting an impending error allows for preventative measures or actions which might mitigate the effects of the error. Modern vehicles generate data in the form of sensor readings accessible through the vehicle’s Controller Area Network (CAN). Such data is generally too extensive to aid in analysis and decision making unless machine learning-based methods are used. This paper proposes a method utilizing a recurrent neural network (RNN) to predict an impending fault before it occurs through the use of CAN data.
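
A minimal PyTorch sketch of the kind of recurrent model described above: an LSTM reads a window of CAN signals and outputs the probability that a fault is impending; the signal count, window length, and synthetic labels are placeholders, not the paper's data or architecture.

    import torch
    import torch.nn as nn

    class FaultPredictor(nn.Module):
        """LSTM over a window of CAN signals -> logit for an impending fault."""
        def __init__(self, n_signals=16, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_signals, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, time, n_signals)
            _, (h_n, _) = self.lstm(x)       # final hidden state summarizes the window
            return self.head(h_n[-1])        # raw logit; apply sigmoid for probability

    # Illustrative training step on synthetic data standing in for CAN windows.
    model = FaultPredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    x = torch.randn(32, 200, 16)              # 32 windows, 200 time steps, 16 signals
    y = torch.randint(0, 2, (32, 1)).float()  # 1 = fault occurs within the horizon

    logits = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print("training loss:", loss.item())
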
Technical Paper

High Dimensional Preference Learning: Topological Data Analysis Informed Sampling for Engineering Decision Making

2024-04-09
2024-01-2422
Engineering design decisions often involve many attributes which can differ in their importance to the decision maker (DM), while also exhibiting complex statistical relationships. Learning a decision-making policy that accurately represents the DM’s actions has long been the goal of decision analysts. To circumvent elicitation and modeling issues, this process is often oversimplified in how many factors are considered and how complex the relationships between them are assumed to be. Without these simplifications, classical lottery-based preference elicitation is overly expensive, and the quality of the responses degrades rapidly as the number of attributes increases. In this paper, we investigate the ability of deep preference machine learning to model high-dimensional decision-making policies using rankings elicited from decision makers.
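
A minimal sketch of pairwise preference learning from elicited rankings, in the spirit of the deep preference approach described above: a small network scores each alternative's attribute vector, and a RankNet-style logistic loss on score differences is fit to pairs in which one alternative was ranked above the other; the dimensions and data are synthetic placeholders.

    import torch
    import torch.nn as nn

    # Score network: maps an alternative's attribute vector to a scalar utility score.
    N_ATTR = 12
    scorer = nn.Sequential(nn.Linear(N_ATTR, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

    # Synthetic stand-in for pairs (a, b) where the decision maker ranked a above b.
    a = torch.randn(256, N_ATTR)
    b = torch.randn(256, N_ATTR)

    for step in range(200):
        # RankNet-style loss: P(a preferred to b) = sigmoid(s_a - s_b), target 1.
        s_a, s_b = scorer(a), scorer(b)
        loss = nn.functional.binary_cross_entropy_with_logits(
            s_a - s_b, torch.ones_like(s_a))
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final pairwise loss:", loss.item())
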