Search Results

Journal Article

A Re-Analysis Methodology for System RBDO Using a Trust Region Approach with Local Metamodels

2010-04-12
2010-01-0645
A simulation-based, system reliability-based design optimization (RBDO) method is presented that can handle problems with multiple failure regions and correlated random variables. Copulas are used to represent the correlation. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with a trust-region optimization approach and local metamodels covering each trust region. PRRA calculates the system reliability of a design very efficiently by performing a single Monte Carlo (MC) simulation per trust region. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, therefore allowing the use of a gradient-based optimizer. The PRRA method is based on importance sampling. It provides accurate results if the support of the sampling PDF contains the support of the joint PDF of the input random variables. The sequential, trust-region optimization approach satisfies this requirement.
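
As a rough illustration of the importance-sampling idea behind PRRA (a minimal sketch, not the authors' implementation), the snippet below estimates a probability of failure with a sampling PDF whose support contains that of the input distribution; the limit-state function and both distributions are hypothetical.

    import numpy as np
    from scipy import stats

    # Minimal importance-sampling reliability estimate (illustrative only, not the PRRA code).
    # Failure is defined by g(x) <= 0 for a hypothetical limit-state function g.
    def limit_state(x):
        return 7.0 - (x[:, 0] + x[:, 1])

    rng = np.random.default_rng(0)
    n = 100_000

    # Assumed true input distribution f(x): two independent normals with mean 2
    f = stats.multivariate_normal(mean=[2.0, 2.0], cov=np.eye(2))

    # Sampling PDF h(x): wider normal shifted toward the failure region; its support
    # (all of R^2) contains the support of f, as the abstract requires.
    h = stats.multivariate_normal(mean=[3.0, 3.0], cov=2.0 * np.eye(2))

    x = h.rvs(size=n, random_state=rng)
    w = f.pdf(x) / h.pdf(x)            # importance weights f(x) / h(x)
    failed = limit_state(x) <= 0.0     # indicator of failure

    pf = np.mean(failed * w)           # probability-of-failure estimate
    print(f"Estimated probability of failure: {pf:.4e}")
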
Journal Article

Piston Design Using Multi-Objective Reliability-Based Design Optimization

2010-04-12
2010-01-0907
Piston design is a challenging engineering problem which involves complex physics and requires satisfying multiple performance objectives. Uncertainty in piston operating conditions and variability in piston design variables are inevitable and must be accounted for. The piston assembly can be a major source of engine mechanical friction and cold start noise, if not designed properly. In this paper, an analytical piston model is used in a deterministic and probabilistic (reliability-based) multi-objective design optimization process to obtain an optimal piston design. The model predicts piston performance in terms of scuffing, friction, and noise. In order to keep the computational cost low, efficient and accurate metamodels of the piston performance metrics are used. The Pareto set of all optimal solutions is calculated, allowing the designer to choose the “best” solution according to trade-offs among the multiple objectives.
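
For readers unfamiliar with the Pareto-set step, a minimal sketch of extracting the non-dominated designs from a set of candidates follows; the two objective columns are placeholders, not the piston scuffing/friction/noise metamodels.

    import numpy as np

    def pareto_mask(objectives):
        """Return a boolean mask of non-dominated points (all objectives minimized)."""
        n = objectives.shape[0]
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            if not mask[i]:
                continue
            # A point is dominated if another point is <= in every objective and < in at least one.
            dominated_by = np.all(objectives <= objectives[i], axis=1) & np.any(objectives < objectives[i], axis=1)
            if dominated_by.any():
                mask[i] = False
        return mask

    rng = np.random.default_rng(1)
    # Hypothetical (friction, noise) objective values for 200 candidate piston designs
    candidates = rng.random((200, 2))
    front = candidates[pareto_mask(candidates)]
    print(f"{front.shape[0]} non-dominated designs out of {candidates.shape[0]}")
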
Journal Article

A Comparative Benchmark Study of using Different Multi-Objective Optimization Algorithms for Restraint System Design

2014-04-01
2014-01-0564
Vehicle restraint system design is a difficult optimization problem to solve because (1) the nature of the problem is highly nonlinear, non-convex, noisy, and discontinuous; (2) there are large numbers of discrete and continuous design variables; (3) a design has to meet safety performance requirements for multiple crash modes simultaneously, hence there are a large number of design constraints. Based on the above knowledge of the problem, it is understandable why design of experiments (DOE) does not produce a high percentage of feasible solutions, and it is difficult for response surface methods (RSM) to capture the true landscape of the problem. Furthermore, in order to keep the restraint system more robust, the complexity of restraint system content needs to be minimized in addition to minimizing the relative risk score to achieve New Car Assessment Program (NCAP) 5-star rating.
Journal Article

Enhancing Decision Topology Assessment in Engineering Design

2014-04-01
2014-01-0719
Implications of decision analysis (DA) on engineering design are important and well-documented. However, widespread adoption has not occurred. To that end, the authors recently proposed decision topologies (DT) as a visual method for representing decision situations and proved that they are entirely consistent with normative decision analysis. This paper addresses the practical issue of assessing the DTs of a designer using their responses. As in classical DA, this step is critical to encoding the decision maker's (DM's) preferences so that further analysis and mathematical optimization can be performed on the correct set of preferences. We show how multi-attribute DTs can be directly assessed from DM responses. Furthermore, we show that preferences under uncertainty can be trivially incorporated and that topologies can be constructed using single attribute topologies similarly to multi-linear functions in utility analysis. This incremental construction simplifies the process of topology construction.
Journal Article

Bootstrapping and Separable Monte Carlo Simulation Methods Tailored for Efficient Assessment of Probability of Failure of Structural Systems

2015-04-14
2015-01-0420
There is randomness in both the applied loads and the strength of systems. Therefore, to account for the uncertainty, the safety of the system must be quantified using its reliability. Monte Carlo Simulation (MCS) is widely used for probabilistic analysis because of its robustness. However, the high computational cost limits the accuracy of MCS. Smarslok et al. [2010] developed an improved sampling technique for reliability assessment called Separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables. This paper extends SMC to problems with multiple random variables and develops a novel method for estimation of the standard deviation of the probability of failure of a structure. The method is demonstrated and validated on reliability assessment of an offshore wind turbine under turbulent wind loads.
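
The separable idea can be sketched for a simple load-versus-capacity problem: the same load and capacity samples are compared exhaustively rather than in one-to-one pairs, which reuses samples and lowers the variance of the estimate. The distributions below are illustrative, not the wind-turbine model.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Independent capacity C and load L (assumed distributions, chosen only for illustration)
    n_c, n_l = 500, 500
    capacity = stats.lognorm(s=0.1, scale=10.0).rvs(n_c, random_state=rng)
    load = stats.gumbel_r(loc=7.0, scale=0.8).rvs(n_l, random_state=rng)

    # Crude MC: pair samples one-to-one
    pf_crude = np.mean(load[:n_c] > capacity)

    # Separable MC: compare every load sample with every capacity sample,
    # reusing the same samples to reduce the variance of the estimate.
    pf_smc = np.mean(load[None, :] > capacity[:, None])

    print(f"Crude MC estimate:     {pf_crude:.4f}")
    print(f"Separable MC estimate: {pf_smc:.4f}")
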
Journal Article

Uncertainty Assessment in Restraint System Optimization for Occupants of Tactical Vehicles

2016-04-05
2016-01-0316
We have recently obtained experimental data and used them to develop computational models to quantify occupant impact responses and injury risks for military vehicles during frontal crashes. The number of experimental tests and model runs is, however, relatively small due to their high cost. While this is true across the auto industry, it is particularly critical for the Army and other government agencies operating under tight budget constraints. In this study, we investigate through statistical simulations how the injury risk varies if a large number of experimental tests were conducted. We show that the injury risk distribution is skewed to the right, implying that, although most physical tests result in a small injury risk, there are occasional physical tests for which the injury risk is extremely large. We compute the probabilities of such events and use them to identify optimum design conditions to minimize such probabilities.
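
As a loose illustration of this kind of resampling study (not the authors' injury-risk model), the sketch below bootstraps a small set of hypothetical injury-risk values to estimate how often a repeated test campaign would show a large average risk.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical injury-risk values from a small set of tests (illustrative numbers only)
    observed_risk = np.array([0.04, 0.06, 0.05, 0.09, 0.03, 0.07, 0.05, 0.04])

    # Resample many virtual test campaigns of the same size and record each campaign's mean risk
    n_campaigns = 100_000
    means = rng.choice(observed_risk, size=(n_campaigns, observed_risk.size), replace=True).mean(axis=1)

    # Estimate the probability that a campaign shows an unusually large average risk
    threshold = 0.07
    print(f"P(mean risk > {threshold}) ~ {np.mean(means > threshold):.4f}")
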
Journal Article

Impact of Fuel Sprays on In-Cylinder Flow Length Scales in a Spark-Ignition Direct-Injection Engine

2017-03-28
2017-01-0618
The interaction of fuel sprays and in-cylinder flow in direct-injection engines is expected to alter kinetic energy and integral length scales at least during some portions of the engine cycle. High-speed particle image velocimetry was implemented in an optical four-valve, pent-roof spark-ignition direct-injection single-cylinder engine to quantify this effect. Non-firing motored engine tests were performed at 1300 RPM with and without fuel injection. Two fuel injection timings were investigated: injection early in the intake stroke represents a quasi-homogeneous engine condition, and injection in the mid compression stroke mimics a stratified combustion strategy. Two-dimensional, crank-angle-resolved velocity fields were measured to examine the kinetic energy and integral length scale through critical portions of the engine cycle. Reynolds decomposition was applied to the obtained engine flow fields to extract the fluctuations as an indicator for the turbulent flow.
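
A minimal sketch of the Reynolds-decomposition step on crank-angle-resolved PIV data follows; the array shapes and synthetic velocity values are assumptions, not the measured fields.

    import numpy as np

    # Reynolds decomposition sketch for ensemble PIV data at one crank angle (illustrative shapes).
    # velocity has shape (n_cycles, n_y, n_x, 2): many engine cycles of a 2-D, 2-component field.
    rng = np.random.default_rng(4)
    n_cycles, n_y, n_x = 100, 64, 64
    velocity = rng.normal(loc=5.0, scale=1.5, size=(n_cycles, n_y, n_x, 2))

    # Ensemble (phase) average over cycles at the fixed crank angle ...
    mean_flow = velocity.mean(axis=0)

    # ... and the fluctuation of each cycle about that mean (u' = u - <u>)
    fluctuation = velocity - mean_flow

    # Turbulent kinetic energy per unit mass from the two measured components
    tke = 0.5 * np.mean(np.sum(fluctuation**2, axis=-1), axis=0)
    print(f"Mean in-plane TKE: {tke.mean():.3f} (arbitrary units)")
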
Journal Article

Value of Information for Comparing Dependent Repairable Assemblies and Systems

2018-04-03
2018-01-1103
This article presents an approach for comparing alternative repairable systems and calculating the value of information obtained by testing a specified number of such systems. More specifically, an approach is presented to determine the value of information that comes from field testing a specified number of systems in order to appropriately estimate the reliability metric associated with each of the respective repairable systems. Here the reliability of a repairable system will be measured by its failure rate. In support of the decision-making effort, the failure rate is translated into an expected utility based on a utility curve that represents the risk tolerance of the decision-maker. The algorithm calculates the change of the expected value of the decision with the sample size. The change in the value of the decision represents the value of information obtained from testing.
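
A minimal sketch of translating an uncertain failure rate into an expected utility through a risk-tolerance curve follows; the gamma rate distributions, cost figures, and exponential utility form are illustrative assumptions rather than the paper's elicited values.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    horizon_years = 10.0
    cost_per_failure = 5_000.0     # hypothetical repair cost per failure
    risk_tolerance = 25_000.0      # assumed decision-maker risk tolerance (same cost units)

    def expected_utility(rate_dist, n=200_000):
        rates = rate_dist.rvs(n, random_state=rng)          # sample the uncertain failure rate
        cost = rates * horizon_years * cost_per_failure     # lifecycle repair cost
        return np.mean(-np.exp(cost / risk_tolerance))      # exponential (risk-averse) utility

    # Two alternative repairable systems with equal mean failure rate but different uncertainty
    system_a = stats.gamma(a=4.0, scale=0.05)   # mean rate 0.20 / year, tighter
    system_b = stats.gamma(a=2.0, scale=0.10)   # mean rate 0.20 / year, more uncertain

    print("E[U] system A:", expected_utility(system_a))
    print("E[U] system B:", expected_utility(system_b))
    # With equal mean rates, the more uncertain system receives the lower expected utility.
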
Technical Paper

Numerical Investigation of Snow Accumulation on a Sensor Surface of Autonomous Vehicle

2020-04-14
2020-01-0953
Autonomous Vehicles (AVs) operate based on image information and 3D maps generated by sensors like cameras, LIDARs and RADARs. This information is processed by the on-board processing units to provide the right actuation signals to drive the vehicle. For safe operation, these sensors should provide continuous, high-quality data to the processing units without interruption in all driving conditions like dust, rain, snow and any other adverse driving conditions. Any contamination on the sensor surface/lens due to rain droplets, snow, and other debris would adversely affect the quality of data provided for sensor fusion, and this could result in error states for autonomous driving. In particular, snow is a common contamination condition during driving that might block a sensor surface or camera lens. Predicting and preventing snow accumulation over the sensor surface of an AV is important to overcome this challenge.
Journal Article

An RBDO Method for Multiple Failure Region Problems using Probabilistic Reanalysis and Approximate Metamodels

2009-04-20
2009-01-0204
A Reliability-Based Design Optimization (RBDO) method for multiple failure regions is presented. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with an approximate global metamodel with local refinements. The latter serves as an indicator to determine the failure and safe regions. PRRA calculates the system reliability of a design very efficiently by performing a single Monte Carlo (MC) simulation. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, therefore allowing the use of a gradient-based optimizer. An “accurate-on-demand” metamodel is used in the PRRA that allows us to handle problems with multiple disjoint failure regions and potentially multiple most-probable points (MPP). The multiple failure regions are identified by using a clustering technique. A maximin “space-filling” sampling technique is used to construct the metamodel. A vibration absorber example highlights the potential of the proposed method.
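
The maximin space-filling idea used to place metamodel training points can be sketched with a simple greedy selection on the unit hypercube; the candidate pool size and dimensions below are arbitrary, and this is not the specific sampling code used in the paper.

    import numpy as np

    # Greedy maximin ("space-filling") sampling sketch on the unit hypercube (illustrative).
    def maximin_sample(n_points, n_dim, n_candidates=2000, seed=0):
        rng = np.random.default_rng(seed)
        candidates = rng.random((n_candidates, n_dim))
        chosen = [candidates[0]]
        for _ in range(n_points - 1):
            # Distance of each candidate to its nearest already-chosen point
            d = np.min(np.linalg.norm(candidates[:, None, :] - np.array(chosen)[None, :, :], axis=2), axis=1)
            chosen.append(candidates[np.argmax(d)])   # pick the candidate farthest from the chosen set
        return np.array(chosen)

    points = maximin_sample(n_points=20, n_dim=2)
    print(points.shape)  # (20, 2)
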
Journal Article

A Variable-Size Local Domain Approach to Computer Model Validation in Design Optimization

2011-04-12
2011-01-0243
A common approach to the validation of simulation models focuses on validation throughout the entire design space. A more recent methodology validates designs as they are generated during a simulation-based optimization process. The latter method relies on validating the simulation model in a sequence of local domains. To improve its computational efficiency, this paper proposes an iterative process, where the size and shape of local domains at the current step are determined from a parametric bootstrap methodology involving maximum likelihood estimators of unknown model parameters from the previous step. Validation is carried out in the local domain at each step. The iterative process continues until the local domain does not change from iteration to iteration during the optimization process, ensuring that a converged design optimum has been obtained.
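
A minimal sketch of the parametric-bootstrap step follows, assuming a Gaussian discrepancy between test data and model predictions; the data, the MLE fit, and the mapping from interval width to domain size are illustrative.

    import numpy as np

    # Parametric-bootstrap sketch for sizing a local validation domain around a design.
    rng = np.random.default_rng(7)

    discrepancy = rng.normal(0.3, 0.8, size=15)                       # hypothetical test-minus-model errors
    mu_hat, sigma_hat = discrepancy.mean(), discrepancy.std(ddof=0)   # Gaussian maximum likelihood estimates

    n_boot = 5000
    boot_mu = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.normal(mu_hat, sigma_hat, size=discrepancy.size)  # parametric resample from the fitted Gaussian
        boot_mu[b] = resample.mean()

    # 95% bootstrap interval on the mean discrepancy; its width drives the size of the next local domain
    lo, hi = np.percentile(boot_mu, [2.5, 97.5])
    half_width = 0.5 * (hi - lo)
    print(f"Mean-discrepancy 95% interval: [{lo:.3f}, {hi:.3f}]  ->  domain half-width ~ {half_width:.3f}")
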
Journal Article

Time-Dependent Reliability of Random Dynamic Systems Using Time-Series Modeling and Importance Sampling

2011-04-12
2011-01-0728
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent operating conditions and material properties, component degradation, etc. The reliability degradation with time may increase the lifecycle cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time interval. In this work, we consider the first-passage reliability which accounts for the first time failure of non-repairable systems. Methods are available in the literature that provide an upper bound on the true reliability; this bound, however, may overestimate the true value considerably. Monte Carlo simulations are accurate but computationally expensive.
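
The first-passage notion can be illustrated with a short Monte Carlo sketch in which each realization of a simple autoregressive response is checked for its first up-crossing of a threshold; the AR(1) process and threshold are assumptions, not the paper's time-series model.

    import numpy as np

    rng = np.random.default_rng(8)

    n_sim, n_steps = 20_000, 200
    threshold = 3.0
    phi, sigma = 0.9, 0.5          # assumed AR(1) coefficient and noise level

    x = np.zeros(n_sim)
    survived = np.ones(n_sim, dtype=bool)
    for _ in range(n_steps):
        x = phi * x + rng.normal(0.0, sigma, size=n_sim)   # propagate each realization one step
        survived &= (x < threshold)                        # a realization fails at its first up-crossing

    print(f"First-passage (time-dependent) reliability over {n_steps} steps: {survived.mean():.4f}")
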
Journal Article

A Simulation and Optimization Methodology for Reliability of Vehicle Fleets

2011-04-12
2011-01-0725
Understanding reliability is critical in design, maintenance and durability analysis of engineering systems. A reliability simulation methodology is presented in this paper for vehicle fleets using limited data. The method can be used to estimate the reliability of non-repairable as well as repairable systems. It can optimally allocate, based on a target system reliability, individual component reliabilities using a multi-objective optimization algorithm. The algorithm establishes a Pareto front that can be used for optimal tradeoff between reliability and the associated cost. The method uses Monte Carlo simulation to estimate the system failure rate and reliability as a function of time. The probability density functions (PDF) of the time between failures for all components of the system are estimated using either limited data or a user-supplied MTBF (mean time between failures) and its coefficient of variation.
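
A minimal sketch of the Monte Carlo step is shown below for a series system whose component times between failures are exponential with user-supplied MTBFs; the MTBF values, mission length, and series assumption are illustrative.

    import numpy as np

    rng = np.random.default_rng(9)

    mtbf_hours = np.array([4000.0, 2500.0, 6000.0])   # hypothetical component MTBFs
    mission_hours = 1000.0
    n_vehicles = 100_000

    # Time to first failure of each component, assuming exponential times between failures
    ttf = rng.exponential(scale=mtbf_hours, size=(n_vehicles, mtbf_hours.size))

    # Series system: the vehicle fails when its first component fails
    system_ttf = ttf.min(axis=1)
    reliability = np.mean(system_ttf > mission_hours)
    print(f"Estimated fleet reliability at {mission_hours:.0f} h: {reliability:.4f}")
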
Journal Article

Optimal Preventive Maintenance Schedule Based on Lifecycle Cost and Time-Dependent Reliability

2012-04-16
2012-01-0070
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. It also affects the scheduling for preventive maintenance. Reliability usually degrades with time, thereby increasing the lifecycle cost due to more frequent failures, which result in increased warranty costs, costly repairs and loss of market share. In a lifecycle cost based design, we must account for product quality and preventive maintenance using time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. For a repairable system, preventive maintenance is scheduled to avoid failures, unnecessary production loss and safety violations. This article proposes a methodology to obtain the optimal scheduling for preventive maintenance using time-dependent reliability principles.
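
A compact sketch of the underlying trade-off follows: choose the preventive-maintenance interval that minimizes the long-run cost rate of an age-replacement policy under a Weibull (increasing-failure-rate) model. The Weibull parameters and costs are assumptions, not the paper's formulation.

    import numpy as np
    from scipy import stats, integrate, optimize

    failure = stats.weibull_min(c=2.5, scale=2000.0)   # assumed increasing-failure-rate model (hours)
    cost_preventive, cost_failure = 500.0, 5000.0      # hypothetical maintenance and failure costs

    def cost_rate(T):
        R = failure.sf                                                    # survival (reliability) function
        expected_cycle_cost = cost_preventive * R(T) + cost_failure * failure.cdf(T)
        expected_cycle_length, _ = integrate.quad(R, 0.0, T)
        return expected_cycle_cost / expected_cycle_length                # long-run cost per operating hour

    res = optimize.minimize_scalar(cost_rate, bounds=(100.0, 10_000.0), method="bounded")
    print(f"Optimal preventive-maintenance interval ~ {res.x:.0f} h, cost rate {res.fun:.3f} per h")
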
Journal Article

System Topology Identification with Limited Test Data

2012-04-16
2012-01-0064
In this article, we present an approach to identify the system topology using simulation for reliability calculations. The system topology describes how all components in a system are functionally connected. Most of the reliability engineering literature assumes either that the system topology is known, and therefore all failure modes can be deduced, or that, when the topology is not known, we are only interested in identifying the dominant failure modes. The authors contend that we should try to extract as much information about the system topology from failure or success information of a system as possible. This will not only identify the dominant failure modes but will also provide an understanding of how the components are functionally connected, allowing for more complicated analyses, if needed. We use an evolutionary approach where system topologies are generated at random and then tested against failure or success data. The topologies evolve based on how consistent they are with the test data.
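
A toy sketch of the evolutionary idea is given below: candidate topologies are encoded as structure functions (truth tables over component up/down states), scored by their agreement with observed pass/fail tests, and evolved by mutation. The "true" topology, the test data, and the genetic-algorithm settings are all illustrative.

    import numpy as np

    rng = np.random.default_rng(11)
    n_comp = 3
    # Enumerate all component up/down states (rows) for a 3-component system
    states = np.array([[(i >> k) & 1 for k in range(n_comp)] for i in range(2 ** n_comp)])

    # Hidden "true" topology used only to generate test data: component 0 in series
    # with the parallel pair (1, 2)
    true_table = states[:, 0] & (states[:, 1] | states[:, 2])

    # Observed tests: random component states and the resulting system outcome
    test_idx = rng.integers(0, 2 ** n_comp, size=40)
    test_outcome = true_table[test_idx]

    def fitness(table):
        return np.mean(table[test_idx] == test_outcome)   # agreement with the test data

    # Evolve a population of candidate structure functions by elitism plus bit-flip mutation
    population = rng.integers(0, 2, size=(30, 2 ** n_comp))
    for _ in range(200):
        scores = np.array([fitness(t) for t in population])
        parents = population[np.argsort(scores)[-10:]]
        children = parents[rng.integers(0, 10, size=20)].copy()
        flips = rng.random(children.shape) < 0.05
        children[flips] ^= 1
        population = np.vstack([parents, children])

    best = population[int(np.argmax([fitness(t) for t in population]))]
    print(f"Best agreement with the test data: {fitness(best):.2f}")
    # Component states never exercised by a test remain unidentified, reflecting the limited-data setting.
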
Journal Article

A Nonparametric Bootstrap Approach to Variable-size Local-domain Design Optimization and Computer Model Validation

2012-04-16
2012-01-0226
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models in the entire design space can be costly, a recent approach was proposed where design optimization and model validation were concurrently performed using a sequential approach with both fixed and variable-size local domains. The variable-size approach used parametric distributions such as Gaussian to quantify the variability in test data and model predictions, and a maximum likelihood estimation to calibrate the prediction model. Also, a parametric bootstrap method was used to size each local domain. In this article, we generalize the variable-size approach, by not assuming any distribution such as Gaussian. A nonparametric bootstrap methodology is instead used to size the local domains. We expect its generality to be useful in applications where distributional assumptions are difficult to verify, or not met at all.
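
The nonparametric counterpart of the bootstrap sizing step can be sketched by resampling the observed discrepancies directly, with replacement, so that no distributional form is assumed; the discrepancy values below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(12)
    discrepancy = np.array([0.9, -0.2, 1.4, 0.1, 0.6, -0.5, 2.1, 0.3, 0.7, 0.0])   # hypothetical test-minus-model errors

    n_boot = 5000
    boot_means = np.array([
        rng.choice(discrepancy, size=discrepancy.size, replace=True).mean()   # nonparametric resample
        for _ in range(n_boot)
    ])

    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"Nonparametric 95% interval on mean discrepancy: [{lo:.3f}, {hi:.3f}]")
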
Journal Article

Multi-Objective Decision Making under Uncertainty and Incomplete Knowledge of Designer Preferences

2011-04-12
2011-01-1080
Multi-attribute decision making and multi-objective optimization complement each other. Often, while making design decisions involving multiple attributes, a Pareto front is generated using a multi-objective optimizer. The end user then chooses the optimal design from the Pareto front based on his/her preferences. This seemingly simple methodology requires sufficient modification if uncertainty is present. We explore two kinds of uncertainties in this paper: uncertainty in the decision variables, which we call inherent design problem (IDP) uncertainty, and uncertainty in knowledge of the preferences of the decision maker, which we refer to as preference assessment (PA) uncertainty. From a purely utility-theory perspective, a rational decision maker maximizes his or her expected multi-attribute utility.
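
A small sketch of ranking Pareto designs by expected multi-attribute utility under attribute (IDP) uncertainty follows; the additive-exponential utility form, the weights, and the noise level are illustrative assumptions rather than an elicited preference model.

    import numpy as np

    rng = np.random.default_rng(13)

    # Hypothetical Pareto designs with nominal (cost, mass) attributes, both to be minimized
    pareto = np.array([[1.0, 9.0], [3.0, 6.0], [5.0, 4.0], [8.0, 2.0]])
    weights = np.array([0.6, 0.4])          # assumed attribute weights

    def expected_utility(design, n=50_000):
        noisy = design + rng.normal(0.0, 0.5, size=(n, 2))    # IDP uncertainty on the attributes
        # Additive single-attribute utilities, each decreasing in its attribute
        u = weights[0] * np.exp(-noisy[:, 0] / 5.0) + weights[1] * np.exp(-noisy[:, 1] / 5.0)
        return u.mean()

    scores = [expected_utility(d) for d in pareto]
    print("Preferred Pareto design (index):", int(np.argmax(scores)))
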
Journal Article

Managing the Computational Cost of Monte Carlo Simulation with Importance Sampling by Considering the Value of Information

2013-04-08
2013-01-0943
Importance Sampling is a popular method for reliability assessment. Although it is significantly more efficient than standard Monte Carlo simulation if a suitable sampling distribution is used, in many design problems it is too expensive. The authors have previously proposed a method to manage the computational cost in standard Monte Carlo simulation that views design as a choice among alternatives with uncertain reliabilities. Information from simulation has value only if it helps the designer make a better choice among the alternatives. This paper extends their method to Importance Sampling. First, the designer estimates the prior probability density functions of the reliabilities of the alternative designs and calculates the expected utility of the choice of the best design. Subsequently, the designer estimates the likelihood function of the probability of failure by performing an initial simulation with Importance Sampling.
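
The value-of-information logic can be sketched with a simplified preposterior analysis: Beta priors on the failure probabilities of two designs, a hypothetical batch of additional samples for one of them, and the resulting gain in the expected utility of the choice. The binomial likelihood below stands in for the paper's importance-sampling likelihood, and all numbers are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(14)
    utility_fail, utility_safe = -100.0, 1.0        # assumed outcome utilities

    # Beta priors on the probability of failure of two alternative designs (assumed)
    a_prior = stats.beta(2, 200)      # design A: better characterized
    b_a, b_b = 1, 150
    b_prior = stats.beta(b_a, b_b)    # design B: more uncertain

    def expected_utility(p_fail):
        return p_fail * utility_fail + (1.0 - p_fail) * utility_safe

    # Expected utility of choosing the better design with prior information only
    u_now = max(expected_utility(a_prior.mean()), expected_utility(b_prior.mean()))

    # Preposterior analysis: simulate what n additional samples of design B might show,
    # update the Beta prior, and record the expected utility of the choice made afterwards
    n_samples, n_rep = 200, 50_000
    p_true = b_prior.rvs(size=n_rep, random_state=rng)
    fails = rng.binomial(n_samples, p_true)
    b_post_mean = (b_a + fails) / (b_a + b_b + n_samples)
    u_after = np.maximum(expected_utility(a_prior.mean()), expected_utility(b_post_mean))

    print(f"Expected value of the information from {n_samples} samples: {u_after.mean() - u_now:.4f}")
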
Journal Article

Warranty Forecasting of Repairable Systems for Different Production Patterns

2017-03-28
2017-01-0209
Warranty forecasting of repairable systems is very important for manufacturers of mass produced systems. It is desired to predict the Expected Number of Failures (ENF) after a censoring time using collected failure data before the censoring time. Moreover, systems may be produced with a defective component resulting in extensive warranty costs even after the defective component is detected and replaced with a new design. In this paper, we present a forecasting method to predict the ENF of a repairable system using observed data, which are used to calibrate a Generalized Renewal Process (GRP) model. Manufacturing of products may exhibit different production patterns with different failure statistics through time. For example, vehicles produced in different months may have different failure intensities because of supply chain differences or different skills of production workers.
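
A minimal sketch of forecasting the ENF with a generalized renewal (Kijima Type-I virtual age) simulation is shown below; the Weibull parameters, the repair-effectiveness factor, and the horizon are assumptions, not values calibrated from warranty data.

    import numpy as np

    rng = np.random.default_rng(15)
    beta, eta, q = 1.8, 1000.0, 0.4        # assumed Weibull shape/scale and repair effectiveness
    horizon, n_systems = 3000.0, 20_000

    def draw_time_to_next_failure(virtual_age):
        # Inverse of the Weibull CDF conditioned on survival to the current virtual age
        u = rng.random(virtual_age.shape)
        return eta * ((virtual_age / eta) ** beta - np.log(1.0 - u)) ** (1.0 / beta) - virtual_age

    failures = np.zeros(n_systems)
    t = np.zeros(n_systems)        # operating time of each simulated system
    v = np.zeros(n_systems)        # Kijima Type-I virtual age
    active = np.ones(n_systems, dtype=bool)
    while active.any():
        dt = draw_time_to_next_failure(v[active])
        t_new = t[active] + dt
        within = t_new <= horizon
        idx = np.where(active)[0]
        failures[idx[within]] += 1
        t[idx[within]] = t_new[within]
        v[idx[within]] += q * dt[within]   # imperfect repair: virtual age grows by q * operating time
        active[idx[~within]] = False

    print(f"Forecast ENF per system over {horizon:.0f} h: {failures.mean():.3f}")
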
Technical Paper

Improving Low Frequency Torsional Vibrations NVH Performance through Analysis and Test

2007-05-15
2007-01-2242
Low frequency torsional vibrations can be a significant source of objectionable vehicle vibrations and in-vehicle boom, especially with changes in engine operation required for improved fuel economy. These changes include lower torque converter lock-up speeds and cylinder deactivation. This paper has two objectives: 1) Examine the effect of increased torsional vibrations on vehicle NVH performance and ways to improve this performance early in the program using test and simulation techniques. The important design parameters affecting vehicle NVH performance will be identified, and the trade-offs required to produce an optimized design will be examined. Also, the relationship between torsional vibrations and mount excursions will be examined. 2) Investigate the ability of simulation techniques to predict and improve torsional vibration NVH performance. Evaluate the accuracy of the analytical models by comparison to test results.