Search Results

Journal Article

A Re-Analysis Methodology for System RBDO Using a Trust Region Approach with Local Metamodels

2010-04-12
2010-01-0645
A simulation-based, system reliability-based design optimization (RBDO) method is presented that can handle problems with multiple failure regions and correlated random variables. Copulas are used to represent the correlation. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with a trust-region optimization approach and local metamodels covering each trust region. PRRA calculates the system reliability of a design very efficiently by performing a single Monte Carlo (MC) simulation per trust region. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, therefore allowing the use of a gradient-based optimizer. The PRRA method is based on importance sampling. It provides accurate results if the support of the sampling PDF contains the support of the joint PDF of the input random variables. The sequential, trust-region optimization approach satisfies this requirement.
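As a rough illustration of the re-analysis idea described in this abstract (not the paper's PRRA implementation), the sketch below reuses one set of Monte Carlo samples drawn from a sampling PDF and reweights them to estimate the failure probability of several candidate designs. The limit-state function, distributions, and design parameter are hypothetical.

```python
# Minimal importance-sampling re-analysis sketch (illustrative, not the paper's PRRA code).
# Samples drawn once from a sampling PDF h(x) are reused for each design by
# reweighting with f(x; d) / h(x).
import numpy as np
from scipy import stats

def limit_state(x, d):
    # Hypothetical limit state: failure occurs when g(x; d) < 0
    return d - x[:, 0] ** 2 - x[:, 1]

# Sampling PDF h: a wide normal whose support covers all candidate designs
h = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.diag([2.0, 2.0]))
x = h.rvs(size=200_000, random_state=0)

def failure_probability(d, mean, cov):
    """Re-analyze one design by reweighting the fixed sample for its input PDF f(x; d)."""
    f = stats.multivariate_normal(mean=mean, cov=cov)
    w = np.exp(f.logpdf(x) - h.logpdf(x))   # importance weights
    indicator = limit_state(x, d) < 0.0     # failure indicator
    return np.mean(indicator * w)

# Several designs evaluated from the same Monte Carlo sample
for d in (2.0, 2.5, 3.0):
    print(d, failure_probability(d, mean=[0.0, 0.0], cov=np.eye(2)))
```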
Journal Article

Piston Design Using Multi-Objective Reliability-Based Design Optimization

2010-04-12
2010-01-0907
Piston design is a challenging engineering problem which involves complex physics and requires satisfying multiple performance objectives. Uncertainty in piston operating conditions and variability in piston design variables are inevitable and must be accounted for. The piston assembly can be a major source of engine mechanical friction and cold-start noise if not designed properly. In this paper, an analytical piston model is used in a deterministic and probabilistic (reliability-based) multi-objective design optimization process to obtain an optimal piston design. The model predicts piston performance in terms of scuffing, friction, and noise. In order to keep the computational cost low, efficient and accurate metamodels of the piston performance metrics are used. The Pareto set of all optimal solutions is calculated, allowing the designer to choose the “best” solution according to trade-offs among the multiple objectives.
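A short sketch of the Pareto-set idea mentioned above: given candidate designs scored on several objectives (the two-objective scores here are made up, standing in for metamodel predictions such as friction and noise), keep only the non-dominated ones. This is generic multi-objective bookkeeping, not the optimizer used in the paper.

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        # Design j dominates i if it is <= in every objective and < in at least one
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        if np.any(dominates_i):
            non_dominated[i] = False
    return non_dominated

# Hypothetical piston candidates scored on (friction, noise) by a metamodel
scores = np.array([[1.2, 3.0], [0.9, 3.5], [1.5, 2.0], [1.0, 2.8], [1.4, 2.9]])
print(scores[pareto_front(scores)])   # the Pareto set of trade-off designs
```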
Journal Article

A Comparative Benchmark Study of using Different Multi-Objective Optimization Algorithms for Restraint System Design

2014-04-01
2014-01-0564
Vehicle restraint system design is a difficult optimization problem to solve because (1) the nature of the problem is highly nonlinear, non-convex, noisy, and discontinuous; (2) there are large numbers of discrete and continuous design variables; and (3) a design has to meet safety performance requirements for multiple crash modes simultaneously, hence a large number of design constraints. Given these characteristics, it is understandable why design of experiments (DOE) does not produce a high percentage of feasible solutions and why it is difficult for response surface methods (RSM) to capture the true landscape of the problem. Furthermore, to keep the restraint system robust, the complexity of the restraint system content needs to be minimized in addition to minimizing the relative risk score to achieve a New Car Assessment Program (NCAP) 5-star rating.
Journal Article

A New Metamodeling Approach for Time-Dependent Reliability of Dynamic Systems with Random Parameters Excited by Input Random Processes

2014-04-01
2014-01-0717
We propose a new metamodeling method to characterize the output (response) random process of a dynamic system with random parameters, excited by input random processes. The metamodel can then be used to efficiently estimate the time-dependent reliability of a dynamic system using analytical or simulation-based methods. The metamodel is constructed by decomposing the input random processes using principal components or wavelets and then using a few simulations to estimate the distributions of the decomposition coefficients. A similar decomposition is also performed on the output random process. A kriging model is then established between the input and output decomposition coefficients and subsequently used to quantify the output random process corresponding to a realization of the input random parameters and random processes. What distinguishes our approach from others in metamodeling is that the system input is not deterministic but random.
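The sketch below shows the ingredients the abstract names, under simplifying assumptions: discretized input trajectories are reduced with principal components, a few simulations supply matching output trajectories, and a kriging (Gaussian-process) model maps input coefficients to output coefficients. The toy "dynamic system" and data sizes are invented for illustration.

```python
# Illustrative ingredients only (not the authors' implementation):
# PCA decomposition of input/output processes plus a kriging map between coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

# A few training simulations: rows are realizations of X(t) and Y(t) on a time grid
n_train, n_t = 40, 200
t = np.linspace(0.0, 10.0, n_t)
X = rng.normal(size=(n_train, n_t)).cumsum(axis=1) * 0.1     # toy input random process
Y = np.sin(t) * X + 0.05 * rng.normal(size=(n_train, n_t))   # toy system response

# Truncated principal-component decomposition of input and output processes
pca_x = PCA(n_components=5).fit(X)
pca_y = PCA(n_components=5).fit(Y)
a = pca_x.transform(X)   # input decomposition coefficients
b = pca_y.transform(Y)   # output decomposition coefficients

# Kriging metamodel between the coefficient spaces
gp = GaussianProcessRegressor(normalize_y=True).fit(a, b)

# Predict the full output trajectory for a new input realization
x_new = rng.normal(size=(1, n_t)).cumsum(axis=1) * 0.1
y_new = pca_y.inverse_transform(gp.predict(pca_x.transform(x_new)))
```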
Journal Article

Enhancing Decision Topology Assessment in Engineering Design

2014-04-01
2014-01-0719
Implications of decision analysis (DA) on engineering design are important and well-documented. However, widespread adoption has not occurred. To that end, the authors recently proposed decision topologies (DT) as a visual method for representing decision situations and proved that they are entirely consistent with normative decision analysis. This paper addresses the practical issue of assessing the DTs of a designer using their responses. As in classical DA, this step is critical to encoding the decision maker's (DM) preferences so that further analysis and mathematical optimization can be performed on the correct set of preferences. We show how multi-attribute DTs can be directly assessed from DM responses. Furthermore, we show that preferences under uncertainty can be trivially incorporated and that topologies can be constructed from single-attribute topologies, similarly to multi-linear functions in utility analysis. This incremental construction simplifies the process of topology construction.
Journal Article

Uncertainty Assessment in Restraint System Optimization for Occupants of Tactical Vehicles

2016-04-05
2016-01-0316
We have recently obtained experimental data and used them to develop computational models to quantify occupant impact responses and injury risks for military vehicles during frontal crashes. The number of experimental tests and model runs is, however, relatively small due to their high cost. While this is true across the auto industry, it is particularly critical for the Army and other government agencies operating under tight budget constraints. In this study we investigate, through statistical simulations, how the injury risk would vary if a large number of experimental tests were conducted. We show that the injury risk distribution is skewed to the right, implying that, although most physical tests result in a small injury risk, there are occasional physical tests for which the injury risk is extremely large. We compute the probabilities of such events and use them to identify optimum design conditions that minimize such probabilities.
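A minimal sketch of the tail-probability calculation the abstract alludes to: draw many virtual test outcomes from a right-skewed risk distribution and estimate the chance of an extremely large risk. The lognormal shape, parameters, and threshold are assumptions for illustration; the paper's distributions come from calibrated occupant models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative right-skewed injury-risk distribution (hypothetical parameters)
n_virtual_tests = 1_000_000
injury_risk = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n_virtual_tests)

threshold = 0.30   # hypothetical "extremely large risk" level
p_extreme = np.mean(injury_risk > threshold)
print("P(risk > 0.30) ~=", round(p_extreme, 4))
```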
Journal Article

Lightweight Stiffening Ribs in Structural Plates

2017-03-28
2017-01-0268
The aim of this analysis was to model the effect of adding stiffening ribs to structural aluminum components by friction stir processing (FSP) nanomaterial into the aluminum matrix. These stiffening ribs could dampen, redirect, or otherwise alter the transmission of energy waves created by automotive, ballistic, or blast shocks to improve noise, vibration, and harshness (NVH) and structural integrity (reduced joint stress) response. Since the ribs are not created by geometry changes, they can be space efficient and deflect blast/ballistic energy better than geometric ribbing, resulting in a lighter-weight solution. The blast and ballistic performance of different FSP rib patterns in AL 5182 and AL 7075 was simulated and compared to the performance of an equivalent weight of rolled homogeneous armor (RHA) plate. FSP helps to increase the localized strength and stiffness of the base metal while achieving lightweighting of the base metal.
Journal Article

A Variable-Size Local Domain Approach to Computer Model Validation in Design Optimization

2011-04-12
2011-01-0243
A common approach to the validation of simulation models focuses on validation throughout the entire design space. A more recent methodology validates designs as they are generated during a simulation-based optimization process. The latter method relies on validating the simulation model in a sequence of local domains. To improve its computational efficiency, this paper proposes an iterative process where the size and shape of the local domain at the current step are determined from a parametric bootstrap methodology involving maximum likelihood estimators of unknown model parameters from the previous step. Validation is carried out in the local domain at each step. The iterative process continues until the local domain does not change from iteration to iteration during the optimization process, ensuring that a converged design optimum has been obtained.
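A minimal sketch of the parametric bootstrap building block, under stated assumptions: fit maximum likelihood estimates to limited test data at the current design, resample synthetic datasets from the fitted model, and use the spread of the re-estimated parameter as a (hypothetical) rule for sizing the next local domain. The data, the normal model, and the percentile rule are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Limited test data at the current design (illustrative values)
test_data = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4])

# Maximum likelihood estimates for a normal model
mu_hat, sigma_hat = test_data.mean(), test_data.std(ddof=0)

# Parametric bootstrap: resample from the fitted model, re-estimate the mean
n_boot = 5_000
boot_mu = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.normal(mu_hat, sigma_hat, size=test_data.size)
    boot_mu[b] = resample.mean()

# Hypothetical sizing rule: half-width of a bootstrap percentile interval
lo, hi = np.percentile(boot_mu, [2.5, 97.5])
print("parametric local-domain half-width ~", round((hi - lo) / 2, 3))
```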
Journal Article

Time-Dependent Reliability of Random Dynamic Systems Using Time-Series Modeling and Importance Sampling

2011-04-12
2011-01-0728
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent operating conditions and material properties, component degradation, etc. The reliability degradation with time may increase the lifecycle cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time interval. In this work, we consider the first-passage reliability, which accounts for the first-time failure of non-repairable systems. Methods available in the literature provide an upper bound on the true reliability, which may overestimate the true value considerably. Monte Carlo simulations are accurate but computationally expensive.
Journal Article

A Simulation and Optimization Methodology for Reliability of Vehicle Fleets

2011-04-12
2011-01-0725
Understanding reliability is critical in the design, maintenance, and durability analysis of engineering systems. A reliability simulation methodology is presented in this paper for vehicle fleets using limited data. The method can be used to estimate the reliability of non-repairable as well as repairable systems. Based on a target system reliability, it can optimally allocate individual component reliabilities using a multi-objective optimization algorithm. The algorithm establishes a Pareto front that can be used for an optimal trade-off between reliability and the associated cost. The method uses Monte Carlo simulation to estimate the system failure rate and reliability as functions of time. The probability density functions (PDF) of the time between failures for all components of the system are estimated using either limited data or a user-supplied MTBF (mean time between failures) and its coefficient of variation.
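As a small sketch of the simulation step described above, the code below samples component times to first failure from lognormal distributions defined by a user-supplied MTBF and coefficient of variation, and estimates the reliability of a non-repairable series system by Monte Carlo. The component values, series assumption, and lognormal choice are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def lognormal_params(mtbf, cov):
    """Convert an MTBF and coefficient of variation to lognormal (mu, sigma)."""
    sigma2 = np.log(1.0 + cov ** 2)
    return np.log(mtbf) - 0.5 * sigma2, np.sqrt(sigma2)

# Hypothetical components: (MTBF in hours, coefficient of variation)
components = [(1200.0, 0.6), (800.0, 0.4), (2000.0, 0.8)]
n_sim, horizon = 100_000, 500.0   # simulated vehicles, mission time in hours

# Series, non-repairable assumption: the system fails at the first component failure
first_failure = np.full(n_sim, np.inf)
for mtbf, cov in components:
    mu, sigma = lognormal_params(mtbf, cov)
    first_failure = np.minimum(first_failure, rng.lognormal(mu, sigma, size=n_sim))

print("R(500 h) ~=", round(np.mean(first_failure > horizon), 3))
```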
Journal Article

A Study of Anisotropy and Post-Necking Local Fracture Strain of Advanced High Strength Steel with the Utilization of Digital Image Correlation

2011-04-12
2011-01-0992
The automotive industry has a strong need for lightweight materials capable of withstanding large mechanical loads. Advanced high-strength steels (AHSS), which have high tensile strength and formability, show great promise for automotive applications, yet if they are to be more widely used, it is important to understand their deformation behavior; this is particularly important for the development of the forming limit diagrams (FLD) used in stamping processes. The goal of the present study was to determine the extent to which anisotropy introduced by the rolling direction affects the local fracture strain. Three grades of dual-phase AHSS and one high-strength low-alloy (HSLA) 50 ksi grade steel were tested under plane strain conditions. Half of the samples were loaded along their rolling direction and the other half transverse to it. In order to achieve plane strain conditions, non-standard dogbone samples were loaded on a wide-grip MTS tensile test machine.
Journal Article

Optimal Preventive Maintenance Schedule Based on Lifecycle Cost and Time-Dependent Reliability

2012-04-16
2012-01-0070
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. It also affects the scheduling of preventive maintenance. Reliability usually degrades with time, increasing therefore the lifecycle cost due to more frequent failures, which result in increased warranty costs, costly repairs, and loss of market share. In a lifecycle-cost-based design, we must account for product quality and preventive maintenance using time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. For a repairable system, preventive maintenance is scheduled to avoid failures, unnecessary production loss, and safety violations. This article proposes a methodology to obtain the optimal schedule for preventive maintenance using time-dependent reliability principles.
Journal Article

System Topology Identification with Limited Test Data

2012-04-16
2012-01-0064
In this article we present an approach to identify the system topology using simulation for reliability calculations. The system topology describes how all components in a system are functionally connected. Most reliability engineering literature assumes either that the system topology is known, so that all failure modes can be deduced, or that, if the topology is not known, we are only interested in identifying the dominant failure modes. The authors contend that we should try to extract as much information about the system topology as possible from failure or success information of a system. This will not only identify the dominant failure modes but will also provide an understanding of how the components are functionally connected, allowing for more complicated analyses, if needed. We use an evolutionary approach where system topologies are generated at random and then tested against failure or success data. The topologies evolve based on how consistent they are with the test data.
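A toy sketch of the consistency check at the core of such an evolutionary approach, under simplifying assumptions: encode a candidate topology as parallel groups wired in series, predict the system state from recorded component states, and score the candidate by how many pass/fail records it reproduces. The encoding, the fitness measure, and the data are hypothetical, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed tests: component working states (1 = works) and the hidden true structure
comp_states = rng.integers(0, 2, size=(50, 3))
true_system = comp_states[:, 0] & (comp_states[:, 1] | comp_states[:, 2])

def predict_system(states, topology):
    """Evaluate a candidate topology: a list of parallel groups connected in series."""
    result = np.ones(states.shape[0], dtype=int)
    for group in topology:
        result &= np.any(states[:, group], axis=1).astype(int)
    return result

def fitness(topology):
    """Fraction of test records the candidate topology reproduces."""
    return np.mean(predict_system(comp_states, topology) == true_system)

# Two candidate topologies (component indices in each parallel group)
print(fitness([[0], [1, 2]]))   # matches the hidden structure -> fitness 1.0
print(fitness([[0, 1, 2]]))     # a pure parallel guess -> lower fitness
```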
Journal Article

Quality Inspection of Spot Welds using Digital Shearography

2012-04-16
2012-01-0182
Spot welding is an important welding technique which is widely used in the automotive and aerospace industries. One of the keys to checking the quality of the welds is measuring the size of the nugget. In this paper, the shearographic technique is utilized to test weld joint samples under thermal loading. The goal is to identify the different groups of nuggets (i.e., small, medium, and large sizes, which indicate the quality of the spot welds). In the experiments, the sample under test is fixed from behind at its four edges using magnets. Thermal loading is applied to the back side, and the sample is inspected from the front side using the digital shearographic system. Results show great potential for classifying the nugget size into the three groups, and the measurement is highly repeatable.
Journal Article

A Nonparametric Bootstrap Approach to Variable-size Local-domain Design Optimization and Computer Model Validation

2012-04-16
2012-01-0226
Design optimization often relies on computational models, which are subjected to a validation process to ensure their accuracy. Because validation of computer models over the entire design space can be costly, a recent approach was proposed in which design optimization and model validation are performed concurrently using a sequential approach with both fixed and variable-size local domains. The variable-size approach used parametric distributions, such as Gaussian, to quantify the variability in test data and model predictions, and a maximum likelihood estimation to calibrate the prediction model. Also, a parametric bootstrap method was used to size each local domain. In this article, we generalize the variable-size approach by not assuming any particular distribution, such as Gaussian. A nonparametric bootstrap methodology is instead used to size the local domains. We expect its generality to be useful in applications where distributional assumptions are difficult to verify, or are not met at all.
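For contrast with the parametric sketch given earlier in this listing, the following assumes nothing about the data distribution: the limited test data are resampled with replacement and the spread of the resampled statistic stands in for a domain-sizing measure. The data and the percentile rule remain hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# The same limited test data, but no distributional assumption this time
test_data = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4])

n_boot = 5_000
boot_means = np.array([
    rng.choice(test_data, size=test_data.size, replace=True).mean()
    for _ in range(n_boot)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print("nonparametric local-domain half-width ~", round((hi - lo) / 2, 3))
```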
Journal Article

Multi-Objective Decision Making under Uncertainty and Incomplete Knowledge of Designer Preferences

2011-04-12
2011-01-1080
Multi-attribute decision making and multi-objective optimization complement each other. Often, while making design decisions involving multiple attributes, a Pareto front is generated using a multi-objective optimizer. The end user then chooses the optimal design from the Pareto front based on his or her preferences. This seemingly simple methodology requires significant modification if uncertainty is present. We explore two kinds of uncertainty in this paper: uncertainty in the decision variables, which we call inherent design problem (IDP) uncertainty, and uncertainty in our knowledge of the preferences of the decision maker, which we refer to as preference assessment (PA) uncertainty. From a purely utility-theoretic perspective, a rational decision maker maximizes his or her expected multi-attribute utility.
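A minimal worked example of the baseline rule the abstract ends with, under assumed numbers: each design's uncertain attributes are sampled, a hypothetical additive multi-attribute utility is averaged, and the design with the largest expected utility is selected. The utility form, weights, and designs are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(9)

def utility(cost, performance):
    """Hypothetical additive multi-attribute utility (higher is better)."""
    return 0.6 * (1.0 - cost / 100.0) + 0.4 * (performance / 10.0)

# IDP uncertainty: the attribute outcomes of each design are random
designs = {
    "A": (rng.normal(60.0, 8.0, 10_000), rng.normal(7.0, 0.5, 10_000)),
    "B": (rng.normal(45.0, 15.0, 10_000), rng.normal(6.0, 1.2, 10_000)),
}

expected_u = {name: utility(c, p).mean() for name, (c, p) in designs.items()}
print(expected_u, "->", max(expected_u, key=expected_u.get))
```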
Journal Article

Warranty Forecasting of Repairable Systems for Different Production Patterns

2017-03-28
2017-01-0209
Warranty forecasting of repairable systems is very important for manufacturers of mass-produced systems. It is desired to predict the Expected Number of Failures (ENF) after a censoring time using failure data collected before the censoring time. Moreover, systems may be produced with a defective component, resulting in extensive warranty costs even after the defective component is detected and replaced with a new design. In this paper, we present a forecasting method to predict the ENF of a repairable system using observed data to calibrate a Generalized Renewal Process (GRP) model. Manufactured products may exhibit different production patterns with different failure statistics through time. For example, vehicles produced in different months may have different failure intensities because of supply chain differences or differences in the skills of production workers.
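The paper calibrates a GRP model; as a simpler stand-in for the forecasting step, the sketch below simulates a repairable system with a power-law (Crow-AMSAA type) failure intensity and estimates the expected number of failures between a censoring time and a forecast horizon. The parameters and times are hypothetical, and the power-law process is a named substitute, not the GRP model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_power_law_failures(beta, eta, t_end):
    """Simulate failure times of a power-law NHPP on (0, t_end] by inversion."""
    times, t = [], 0.0
    while True:
        # Next event time from cumulative intensity Lambda(t) = (t / eta) ** beta
        t = eta * ((t / eta) ** beta - np.log(rng.random())) ** (1.0 / beta)
        if t > t_end:
            return times
        times.append(t)

# Hypothetical calibrated parameters and times (months in service)
beta, eta = 1.3, 18.0
censoring, horizon = 12.0, 36.0

n_systems = 20_000
enf = np.mean([
    sum(censoring < ft <= horizon for ft in simulate_power_law_failures(beta, eta, horizon))
    for _ in range(n_systems)
])
print("expected failures per system between months 12 and 36 ~=", round(enf, 2))
```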
Technical Paper

Modeling Dependence and Assessing the Effect of Uncertainty in Dependence in Probabilistic Analysis and Decision Under Uncertainty

2010-04-12
2010-01-0697
A complete probabilistic model of uncertainty in probabilistic analysis and design problems is the joint probability distribution of the random variables. Often, it is impractical to estimate this joint probability distribution because the mechanism of the dependence of the variables is not completely understood. This paper proposes modeling dependence by using copulas and demonstrates their representational power. It also compares this representation with a Monte Carlo simulation using dispersive sampling.
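A small sketch of how a copula couples variables while leaving their marginals free, using a Gaussian copula purely as an illustration (the paper treats copulas more generally): correlated normals are pushed through the normal CDF to get correlated uniforms, which are then mapped to arbitrary marginal distributions. The correlation value and marginals are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Gaussian copula with correlation 0.7 between two inputs
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], corr, size=100_000)
u = stats.norm.cdf(z)   # correlated uniforms: the copula sample

# Map the uniforms to arbitrary, possibly non-normal marginals
load = stats.lognorm(s=0.5, scale=10.0).ppf(u[:, 0])          # e.g., a load variable
strength = stats.weibull_min(c=2.0, scale=30.0).ppf(u[:, 1])  # e.g., a capacity variable

# The marginals are preserved while dependence is induced by the copula
print(np.corrcoef(load, strength)[0, 1])
```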
Technical Paper

GPU-based High Performance Parallel Simulation of Tracked Vehicle Operating on Granular Terrain

2010-04-12
2010-01-0650
This contribution demonstrates the use of high performance computing, specifically Graphics Processing Unit (GPU) based computing, for the simulation of tracked ground vehicles. The work closes a gap in physics-based simulation related to the inability to accurately characterize the 3D mobility of tracked vehicles on granular terrains (sand and/or gravel). The problem of tracked vehicle mobility on granular material is approached using a discrete element method that accounts for the interaction between the track and each discrete particle in the terrain. This approach captures the dynamics of systems with more than 1,000,000 bodies interacting simultaneously. Two factors render the approach feasible. First, the frictional contact problem between the terrain and the vehicle draws on a convex optimization methodology whose first-order optimality conditions form a cone complementarity problem.
Technical Paper

Model-Based Embedded Controls Test and Verification

2010-04-12
2010-01-0487
Embedded systems continue to become more complex. As a result, more companies are utilizing model-based design (MBD) development methods and tools. The use of MBD methods and tools helps reduce time to market and provides instant feedback on the system design. One area that continues to mature is the testing and verification of MBD systems. This paper introduces a hybrid approach to functional testing. The test system is composed of simulation software and real-time hardware. It is not always necessary to test a system in a real-time environment, but it is recommended if the goal is to deploy the system to a situation that requires real-time response. Vehicle drive cycles and powertrain control are used in this research as the example test case. In order to test the algorithms on a real-time system, it is necessary to understand the target controller's computing limitations and adjust the algorithms to meet them.