
Technical Paper

Improving Robust Design with Preference Aggregation Methods

2004-03-08
2004-01-1140
Robust design is a methodology for improving the quality of a product or process by minimizing the effect of variations in the inputs without eliminating the causes of those variations. In robust design, the putative best design is obtained by solving a multi-criteria optimization problem, trading off the nominal performance against the minimization of the variation of the performance measure. Because some existing methods combine the two criteria with a weighted sum or another fixed aggregation strategy, which are known to miss Pareto points, they may fail to obtain a desired design. To overcome this inadequacy, a more comprehensive preference aggregation method is implemented here into robust design. Three examples -- one simple mathematical example, one multi-criteria structure design example, and one automotive example -- are presented to illustrate the effectiveness of the proposed method.
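The weighted-sum pitfall the abstract mentions can be demonstrated in a few lines. The sketch below uses three hypothetical designs with two minimized objectives (nominal performance vs. performance variation); it illustrates why a fixed aggregation can miss Pareto points, not the paper's preference aggregation method itself:

```python
# Sketch: a weighted-sum sweep over two minimized objectives can miss
# Pareto points on a non-convex front.  The candidate designs below are
# hypothetical, not taken from the paper.

def pareto_set(points):
    """Return the points not dominated by any other point (minimization)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def weighted_sum_picks(points, n_weights=101):
    """Designs ever selected when minimizing w*f1 + (1 - w)*f2 over a sweep of w."""
    picks = set()
    for i in range(n_weights):
        w = i / (n_weights - 1)
        picks.add(min(points, key=lambda p: w * p[0] + (1 - w) * p[1]))
    return picks

designs = [(0.0, 1.0), (0.55, 0.55), (1.0, 0.0)]
# All three designs are Pareto-optimal, but (0.55, 0.55) lies above the line
# joining the other two, so no weight combination ever selects it.
```

This is exactly the inadequacy a more flexible preference aggregation scheme is meant to overcome.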
Technical Paper

Propagation of Epistemic Uncertainty for Design Reuse

2004-03-08
2004-01-1141
There are two sorts of uncertainty inherent in engineering design, the random and the epistemic. Random, or stochastic, uncertainty deals with the randomness or predictability of an event. It is well understood, easily modeled using classical probability, and ideal for such uncertainties as variations in manufacturing processes or material properties. Epistemic uncertainty deals with our lack of knowledge, our lack of information, and our own and others' subjectivity concerning design parameters. Epistemic uncertainty plays a particularly important role in the early stages of engineering design, when a lack of information about nominal values of parameters is much more important than potential variations in those parameters. Design reuse, or the design of product platforms, is an example in which epistemic uncertainty can play a crucial role in early design.
Technical Paper

Probabilistic Computations for the Main Bearings of an Operating Engine Due to Variability in Bearing Properties

2004-03-08
2004-01-1143
This paper presents the development of surrogate models (metamodels) for evaluating the bearing performance in an internal combustion engine. The metamodels are employed for performing probabilistic analyses for the engine bearings. The metamodels are developed based on results from a simulation solver computed at a limited number of sample points, which sample the design space. An integrated system-level engine simulation model, consisting of a flexible crankshaft dynamics model and a flexible engine block model connected by a detailed hydrodynamic lubrication model, is employed in this paper for generating information necessary to construct the metamodels. An optimal symmetric Latin hypercube algorithm is utilized for identifying the sampling points based on the number and the range of the variables that are considered to vary in the design space.
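The stratified-sampling idea behind Latin hypercube designs can be sketched briefly. The version below is a plain random Latin hypercube, not the optimal symmetric variant the paper uses, and the sample count and variable ranges are hypothetical:

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Basic Latin hypercube sample: each dimension is split into n_samples
    equal strata, and exactly one point falls in each stratum per dimension.
    (A sketch of the stratification idea only -- the paper uses an *optimal
    symmetric* Latin hypercube.)"""
    rng = rng or random.Random(0)
    # Independently shuffled stratum indices for each dimension.
    strata = [list(range(n_samples)) for _ in bounds]
    for s in strata:
        rng.shuffle(s)
    points = []
    for i in range(n_samples):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            # Draw uniformly within this dimension's i-th assigned stratum.
            u = (strata[d][i] + rng.random()) / n_samples
            point.append(lo + u * (hi - lo))
        points.append(point)
    return points

# Hypothetical design space: two variables with different ranges.
samples = latin_hypercube(10, [(0.0, 1.0), (10.0, 20.0)])
```

Each of the ten samples occupies a distinct tenth of each variable's range, which is what makes a small sample set cover the design space evenly enough to fit a metamodel.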
Journal Article

Piston Design Using Multi-Objective Reliability-Based Design Optimization

2010-04-12
2010-01-0907
Piston design is a challenging engineering problem which involves complex physics and requires satisfying multiple performance objectives. Uncertainty in piston operating conditions and variability in piston design variables are inevitable and must be accounted for. The piston assembly can be a major source of engine mechanical friction and cold start noise, if not designed properly. In this paper, an analytical piston model is used in a deterministic and probabilistic (reliability-based) multi-objective design optimization process to obtain an optimal piston design. The model predicts piston performance in terms of scuffing, friction and noise, In order to keep the computational cost low, efficient and accurate metamodels of the piston performance metrics are used. The Pareto set of all optimal solutions is calculated allowing the designer to choose the “best” solution according to trade-offs among the multiple objectives.
Journal Article

On the Time-Dependent Reliability of Non-Monotonic, Non-Repairable Systems

2010-04-12
2010-01-0696
The response of many engineering systems depends on time, so a random process approach is needed to quantify variation or uncertainty. The system input may consist of a combination of random variables and random processes. In this case, a time-dependent reliability analysis must be performed to calculate the probability of failure within a specified time interval. This is known as the cumulative probability of failure, which is, in general, different from the instantaneous probability of failure. Failure occurs if the limit state function becomes negative at least once within the specified time interval. Time-dependent reliability problems appear if, for example, the material properties deteriorate in time or if random loading, modeled by a random process, is involved. Existing methods to calculate the cumulative probability of failure provide an upper bound which may grossly overestimate the true value.
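The distinction between instantaneous and cumulative failure probability can be made concrete with a Monte Carlo sketch. The response process, threshold, and distributions below are hypothetical, chosen only to show that the cumulative probability dominates the instantaneous one:

```python
import math
import random

def failure_probabilities(n_paths=20000, threshold=2.5, seed=1):
    """Monte Carlo contrast of instantaneous vs. cumulative failure probability
    for a toy response process X(t) = A*sin(t) with random amplitude A.
    (Illustrative only; the paper treats general time-dependent systems.)"""
    rng = random.Random(seed)
    times = [i * 2 * math.pi / 100 for i in range(101)]
    fail_instant = 0     # response exceeds the threshold at the final instant
    fail_cumulative = 0  # response exceeds the threshold at least once
    for _ in range(n_paths):
        a = rng.gauss(2.0, 0.5)
        response = [a * math.sin(t) for t in times]
        fail_instant += response[-1] > threshold
        fail_cumulative += max(response) > threshold
    return fail_instant / n_paths, fail_cumulative / n_paths

p_inst, p_cum = failure_probabilities()
# The cumulative probability can never be smaller than the instantaneous one.
```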
Journal Article

Time-Dependent Reliability Estimation for Dynamic Systems Using a Random Process Approach

2010-04-12
2010-01-0644
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent operating conditions and material properties, component degradation, etc. The reliability degradation with time may increase the lifecycle cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time interval. In this work, we consider the first-passage reliability, which accounts for the first-time failure of non-repairable systems. Methods available in the literature provide an upper bound on the true reliability, which may overestimate the true value considerably. This paper proposes a methodology to calculate the cumulative probability of failure (probability of first passage or upcrossing) of a dynamic system driven by an ergodic input random process.
Technical Paper

Modeling and Optimization of Vehicle Drivetrain Dynamic Performance Considering Uncertainty

2005-05-16
2005-01-2371
A vehicle drivetrain is designed to meet specific vehicle performance criteria which usually involve trade-offs among conflicting performance measures. This paper describes a methodology to optimize the drivetrain design including the axle ratio, transmission shift points and transmission shift ratios considering uncertainty. A complete vehicle dynamic model is developed using the bond graph method. The model includes the vehicle, engine, transmission, torque converter, driveline, and transmission controller. An equivalent MATLAB Simulink model is also developed in order to carry out the nonlinear dynamic analysis efficiently. A deterministic optimization is first performed to determine the optimum design in terms of fuel economy, without considering variations or uncertainties. Subsequently, a Reliability-Based Design Optimization is carried out to find the optimum design in the presence of uncertainty.
Technical Paper

Design Optimization and Reliability Estimation with Incomplete Uncertainty Information

2006-04-03
2006-01-0962
Existing methods for design optimization under uncertainty assume that a high level of information is available, typically in the form of data. In reality, however, insufficient data prevent correct inference of probability distributions, membership functions, or interval ranges. In this article we use an engine design example to show that optimal design decisions and reliability estimations depend strongly on uncertainty characterization. We contrast the reliability-based optimal designs to the ones obtained using worst-case optimization, and ask the question of how to obtain non-conservative designs with incomplete uncertainty information. We propose an answer to this question through the use of Bayesian statistics. We estimate the truck engine's reliability based only on available samples, and demonstrate that the accuracy of our estimates increases as more samples become available.
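The core Bayesian idea, that reliability estimates tighten as samples accumulate, can be sketched with a conjugate Beta posterior over pass/fail data. The sample counts are hypothetical, and this is only the general statistical mechanism, not the paper's engine-specific formulation:

```python
import math

def beta_posterior(successes, trials):
    """Mean and standard deviation of the Beta posterior for a reliability R,
    starting from a uniform Beta(1, 1) prior.  With s successes in n trials,
    the posterior is Beta(s + 1, n - s + 1)."""
    a, b = successes + 1, trials - successes + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# More samples -> a tighter reliability estimate, as the paper observes.
m_small, s_small = beta_posterior(9, 10)      # 9 of 10 samples survive
m_large, s_large = beta_posterior(900, 1000)  # 900 of 1000 samples survive
```

With ten samples the posterior is wide; with a thousand, its standard deviation shrinks by roughly an order of magnitude while the mean converges toward the observed survival rate.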
Technical Paper

Monte Carlo Simulation of Overstress Probe Testing for Fatigue Strength

2006-04-03
2006-01-1335
The overstress probe fatigue testing method, although codified to characterize fatigue strength, is poorly understood. While it yields data confirming whether minimum fatigue strength may be met, it does not directly reveal the mean fatigue strength. Procedures for conducting the test are somewhat arbitrary and rely on fitting a 3-parameter Weibull model. In this paper, a Monte Carlo procedure is developed to simulate the overstress probe test. The effect of various parameters used in the test is also discussed. A comparison is made between Weibull and Gaussian models. Suggestions for conducting the overstress probe test are provided.
Technical Paper

Design Optimization Under Uncertainty Using Evidence Theory

2006-04-03
2006-01-0388
Early in the engineering design cycle, it is difficult to quantify product reliability due to insufficient data or information to model uncertainties, so probability theory cannot be used. Design decisions are usually based on fuzzy information which is imprecise and incomplete. Recently, evidence theory has been proposed to handle uncertainty with limited information. In this paper, a computationally efficient design optimization method is proposed based on evidence theory, which can handle a mixture of epistemic and random uncertainties. It quickly identifies the vicinity of the optimal point and the active constraints by moving a hyper-ellipse in the original design space, using a reliability-based design optimization (RBDO) algorithm. Subsequently, a derivative-free optimizer calculates the evidence-based optimum, starting from the close-by RBDO optimum, considering only the identified active constraints.
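The basic evidence-theory quantities can be illustrated directly: belief and plausibility bracket the failure probability when only interval-valued evidence is available. The body of evidence below is hypothetical, and this shows only the uncertainty measures, not the paper's optimization algorithm:

```python
def belief_plausibility(bpa, failure_set):
    """Belief and plausibility of failure from a body of evidence.
    `bpa` maps focal intervals (lo, hi) to basic probability assignments;
    `failure_set` is the interval of failing values.  Belief sums the mass of
    focal intervals fully inside the failure set; plausibility sums the mass
    of all focal intervals that intersect it."""
    f_lo, f_hi = failure_set
    belief = sum(m for (lo, hi), m in bpa.items() if f_lo <= lo and hi <= f_hi)
    plaus = sum(m for (lo, hi), m in bpa.items() if lo <= f_hi and f_lo <= hi)
    return belief, plaus

# Hypothetical expert evidence on a load parameter (intervals and masses).
bpa = {(0.0, 2.0): 0.5, (1.5, 3.0): 0.3, (2.5, 4.0): 0.2}
bel, pl = belief_plausibility(bpa, failure_set=(2.0, 4.0))
# Belief <= true failure probability <= plausibility.
```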
Technical Paper

Simulation of Tire-Snow Interfacial Forces for a Range of Snow Densities with Uncertainty

2006-04-03
2006-01-0497
The objective of this paper is to assess the effect of snow density on tire-snow interaction in the presence of uncertainty. The snow-depth dependent finite element analysis (FEA) and semi-analytical models we have developed recently can predict tire-snow interfacial forces at a given density under combined slip conditions. One drawback of the models is that they are only applicable for fresh, low-density snow due to the unavailability of a density-dependent snow model. In reality, the snow density on the ground can vary between that of fresh snow to heavily compacted snow that is similar to ice. Even for fresh snow on the ground, as a vehicle moves forward, the rear wheels experience higher snow densities than the front wheels. In addition, being a natural material, snow's physical properties vary significantly even for the same density.
Technical Paper

Sensitivity Study of Probit and Two-Point Fatigue Testing Methods

2006-04-03
2006-01-0536
Fatigue strength mean and standard deviation may be estimated by the Probit and 2-Point test methods. In this paper, methodologies for conducting the tests are developed and results from Monte Carlo simulation are presented. The results are compared with those from concurrent testing with the staircase method. While the Probit and 2-Point methods are intuitively attractive, their results are significantly different from those from the staircase method. The latter remains the best of the three.
Technical Paper

An Efficient Possibility-Based Design Optimization Method for a Combination of Interval and Random Variables

2007-04-16
2007-01-0553
Reliability-based design optimization accounts for variation. However, it assumes that statistical information is available in the form of fully defined probabilistic distributions. This is not true for a variety of engineering problems where uncertainty is usually given in terms of interval ranges. In this case, interval analysis or possibility theory can be used instead of probability theory. This paper shows how possibility theory can be used in design and presents a computationally efficient sequential optimization algorithm. The algorithm handles problems with only uncertain variables, or with a combination of random and uncertain design variables and parameters. It consists of a sequence of cycles composed of a deterministic design optimization followed by a set of worst-case reliability evaluation loops. A crank-slider mechanism example demonstrates the accuracy and efficiency of the proposed sequential algorithm.
Technical Paper

A Time-Dependent Reliability Analysis Method using a Niching Genetic Algorithm

2007-04-16
2007-01-0548
A reliability analysis method is presented for time-dependent systems under uncertainty. A level-crossing problem is considered where the system fails if its maximum response exceeds a specified threshold. The proposed method uses a double-loop optimization algorithm. The inner loop calculates the maximum response in time for a given set of random variables, and transforms a time-dependent problem into a time-independent one. A time integration method is used to calculate the response at discrete times. For each sample function of the response random process, the maximum response is found using a global-local search method consisting of a genetic algorithm (GA), and a gradient-based optimizer. This dynamic response usually exhibits multiple peaks and crosses the allowable response level to form a set of complex limit states, which lead to multiple most probable points (MPPs).
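The double-loop structure described above can be sketched compactly. Here the outer loop is plain Monte Carlo sampling and the inner maximization is a dense time grid; the paper's inner loop is a niching GA plus gradient-based search, and the sinusoidal response model and its parameters are hypothetical:

```python
import math
import random

def time_dependent_failure_prob(n_outer=5000, threshold=1.8, seed=3):
    """Double-loop sketch: the outer loop samples the random variables, the
    inner loop maximizes the response over time, converting the
    time-dependent problem into a time-independent one."""
    rng = random.Random(seed)
    times = [i * 2 * math.pi / 200 for i in range(200)]
    failures = 0
    for _ in range(n_outer):
        a = rng.gauss(1.0, 0.3)              # random amplitude
        phi = rng.uniform(0.0, 2 * math.pi)  # random phase
        # Inner loop: maximum response over the time interval.  A grid search
        # stands in for the paper's global-local (GA + gradient) search.
        peak = max(a * math.sin(t + phi) for t in times)
        failures += peak > threshold
    return failures / n_outer

pf = time_dependent_failure_prob()
```

The response of each sample function is multimodal in time, which is why the paper replaces the grid search with a global-local optimizer that can find all the peaks reliably.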
Technical Paper

Probabilistic Analysis for the Performance Characteristics of Engine Bearings due to Variability in Bearing Properties

2003-05-05
2003-01-1733
This paper presents the development of surrogate models (metamodels) for evaluating the bearing performance in an internal combustion engine without performing time-consuming analyses. The metamodels are developed based on results from actual simulation solvers computed at a limited number of sample points, which sample the design space. A finite difference bearing solver is employed in this paper for generating information necessary to construct the metamodels. An optimal symmetric Latin hypercube algorithm is utilized for identifying the sampling points based on the number and the range of the variables that are considered to vary in the design space. The development of the metamodels is validated by comparing results from the metamodels with results from the actual bearing performance solver over a large number of evaluation points. Once the metamodels are established they are employed for performing probabilistic analyses.
Technical Paper

Sensitivity Study of Staircase Fatigue Tests Using Monte Carlo Simulation

2005-04-11
2005-01-0803
The staircase fatigue test method is a well-established, but poorly understood probe for determining fatigue strength mean and standard deviation. The sensitivity of results to underlying distributions was studied using Monte Carlo simulation by repeatedly sampling known distributions of hypothetical fatigue strength data with the staircase test method. In this paper, the effects of the underlying distribution on staircase test results are presented with emphasis on original normal, lognormal, Weibull and bimodal data. The results indicate that the mean fatigue strength determined by the staircase testing protocol is largely unaffected by the underlying distribution, but the standard deviation is not. Suggestions for conducting staircase tests are provided.
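The Monte Carlo approach described above starts from simulating individual staircase tests. The sketch below simulates one test against a known normal strength distribution; the step size, sample size, and the deliberately crude estimator are hypothetical, and the standard Dixon-Mood analysis is not implemented:

```python
import random
import statistics

def staircase_test(true_mean=100.0, true_sd=5.0, step=5.0,
                   n_specimens=30, seed=2):
    """Simulate one staircase fatigue test: load the first specimen at the
    guessed mean strength; after a failure step the stress down, after a
    survival step it up.  Repeating this over many simulated tests (the
    Monte Carlo layer) reveals the sensitivity of the estimates."""
    rng = random.Random(seed)
    stress = true_mean  # initial guess at the mean fatigue strength
    levels = []
    for _ in range(n_specimens):
        strength = rng.gauss(true_mean, true_sd)  # this specimen's strength
        failed = strength < stress
        levels.append(stress)
        stress += -step if failed else step
    # Crude mean-strength estimate: average of the applied stress levels.
    return statistics.mean(levels)

estimate = staircase_test()
```

Swapping the `rng.gauss` draw for lognormal, Weibull, or bimodal strength distributions and repeating the simulation many times is the essence of the sensitivity study.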
Technical Paper

System Reliability-Based Design using a Single-Loop Method

2007-04-16
2007-01-0555
An efficient approach for series system reliability-based design optimization (RBDO) is presented. The key idea is to optimally apportion the system reliability among the failure modes by considering the target values of the failure probabilities of the modes as design variables. Critical failure modes that contribute the most to the overall system reliability are identified. This paper proposes a computationally efficient, system RBDO approach using a single-loop method where the searches for the optimum design and for the most probable failure points proceed simultaneously. Specifically, at each iteration the optimizer uses approximated most probable failure points from the previous iteration to search for the optimum. A second-order Ditlevsen upper bound is used for the joint failure probability of failure modes. Also, an easy-to-implement active set strategy is employed to improve algorithmic stability.
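The second-order Ditlevsen upper bound mentioned above is compact enough to state in code. The failure-mode probabilities and joint probabilities below are hypothetical, and this is only the bound itself, not the paper's single-loop RBDO algorithm:

```python
def ditlevsen_upper_bound(p, p_joint):
    """Second-order Ditlevsen upper bound on a series-system failure
    probability: P_sys <= p[0] + sum_i (p[i] - max_{j<i} p_joint[i][j]).
    `p[i]` is the failure probability of mode i and `p_joint[i][j]` the
    joint failure probability of modes i and j."""
    bound = p[0]
    for i in range(1, len(p)):
        bound += p[i] - max(p_joint[i][j] for j in range(i))
    return bound

# Hypothetical mode probabilities and pairwise joint probabilities.
p = [0.01, 0.02, 0.015]
p_joint = {1: {0: 0.004}, 2: {0: 0.002, 1: 0.006}}
upper = ditlevsen_upper_bound(p, p_joint)
```

Because the pairwise joint probabilities are subtracted, the bound is tighter than the first-order bound `sum(p)`, which simply adds the mode probabilities.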
Technical Paper

Optimal Engine Torque Management for Reducing Driveline Clunk Using Time - Dependent Metamodels

2007-05-15
2007-01-2236
Quality and performance are two important customer requirements in vehicle design. Driveline clunk negatively affects the perceived quality and must therefore be minimized. This is usually achieved using engine torque management, which is part of engine calibration. During a tip-in event, the engine torque rate of rise is limited until all the driveline lash is taken up. However, the engine torque rise and its rate can negatively affect the vehicle throttle response. Therefore, the engine torque management must be balanced against throttle response. In practice, the engine torque rate of rise is calibrated manually. This paper describes a methodology for calibrating the engine torque in order to minimize the clunk disturbance while still meeting throttle response constraints. A set of predetermined engine torque profiles is calibrated in a vehicle, and the transmission turbine speed is measured for each profile. The latter is used to quantify the clunk disturbance.
Technical Paper

An Efficient Re-Analysis Methodology for Vibration of Large-Scale Structures

2007-05-15
2007-01-2326
Finite element analysis is a well-established methodology in structural dynamics. However, optimization and/or probabilistic studies can be prohibitively expensive because they require repeated FE analyses of large models. Various reanalysis methods have been proposed in order to efficiently calculate the dynamic response of a structure after a baseline design has been modified, without re-solving the full finite element model. The parametric reduced-order modeling (PROM) and the combined approximation (CA) methods are two reanalysis methods which can handle large model parameter changes in a relatively efficient manner. Although both methods are promising by themselves, they cannot handle large FE models with many degrees of freedom (e.g. 100,000) and a large number of design parameters (e.g. 50), which are common in practice. In this paper, the advantages and disadvantages of the PROM and CA methods are first discussed in detail.
Technical Paper

Piston Secondary Dynamics Considering Elastohydrodynamic Lubrication

2007-04-16
2007-01-1251
An analytical method is presented in this paper for simulating piston secondary dynamics and piston-bore contact for an asymmetric half piston model including elastohydrodynamic (EHD) lubrication at the bore-skirt interface. A piston EHD analysis is used based on a finite-difference formulation. The oil film is discretized using a two-dimensional mesh. For improved computational efficiency without loss of accuracy, the Reynolds’ equation is solved using a perturbation approach which utilizes an “influence zone” concept, and a successive over-relaxation solver. The analysis includes several important physical attributes such as bore distortion effects due to mechanical and thermal deformation, inertia loading and piston barrelity and ovality. A Newmark-Beta time integration scheme combined with a Newton-Raphson linearization, calculates the piston secondary motion.