Search Results

Viewing 1 to 9 of 9
Technical Paper

Hardware Implementation Details and Test Results for a High-Bandwidth, Hydrostatic Transient Engine Dynamometer System

Transient operation of automobile engines is known to contribute significantly to regulated exhaust emissions and is also a source of drivability concerns. Furthermore, many on-board diagnostic algorithms do not perform well during transient operation and are often temporarily disabled to avoid problems. The inability to quickly and repeatedly test engines under transient conditions in a laboratory setting limits researchers' and development engineers' ability to produce more effective and robust algorithms to lower vehicle emissions. To meet this need, members of the Powertrain Control Research Laboratory (PCRL) at the University of Wisconsin-Madison have developed a high-bandwidth, hydrostatic dynamometer system that will enable researchers to explore transient characteristics of engines and powertrains in the laboratory.
Technical Paper

Feature Extraction from Non-Linear Geometric Models in Design-for-Manufacturing

Automatic manufacturability analysis of injection moldings, sheet metal castings, stampings, forgings, etc., using knowledge-based heuristics depends on shape features, which are abstractions of the three-dimensional (3D) geometric model of the parts. Conventional CAD systems do not explicitly contain shape-feature information; therefore, such information must be extracted from them. So far, extraction of shape features has been restricted to models with simple geometry, such as planar, cylindrical, or conical shapes. Extending shape-feature extraction to non-linear geometric models will allow Design for Manufacturability (DFM) analysis of non-linear models. This paper presents an approach to extracting features from non-linear geometric models. The approach is based on abstract geometric entities called C-loops. The formation of a C-loop depends on a geometric entity called a silhouette: the C-loops are derived from the silhouette boundaries of an object.
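The silhouette idea underlying C-loops can be illustrated with a minimal sketch (not the paper's algorithm): for a polyhedral model and a given view direction, an edge lies on the silhouette when one of its two adjacent faces points toward the viewer and the other away. The edge names, data layout, and test geometry below are hypothetical.

```python
def silhouette_edges(edges, view):
    """Return the edges on the silhouette for the given view direction.

    `edges` maps an edge name to the normals of its two adjacent faces
    (a hypothetical layout chosen for this sketch). An edge is on the
    silhouette when exactly one adjacent face faces the viewer.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [name for name, (n1, n2) in edges.items()
            if (dot(n1, view) > 0) != (dot(n2, view) > 0)]

# Toy example: two edges of a cube, viewed from above (view direction +z
# points from the object toward the viewer).
edges = {
    'top_front': ((0, 0, 1), (1, 0, 0)),   # top face and front face normals
    'front_side': ((1, 0, 0), (0, 1, 0)),  # two side faces, both edge-on
}
sil = silhouette_edges(edges, view=(0, 0, 1))
```

Only `top_front` separates a viewer-facing face from a non-facing one, so it alone is reported; C-loop construction would then trace such edges into closed silhouette boundaries.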
Technical Paper

Development of a Self-Consistent Kinetic Plasma Model of Thermionic Energy Converters

The present work is aimed at developing a computational model of the interelectrode phenomena in thermionic energy converters that is accurate over a very wide range of plasma conditions and operating modes. Previous models have achieved only moderate accuracy over a limited range of validity, a range that excludes a number of advanced thermionic devices, such as barium-cesium converters. The model under development promises improved accuracy in predicting conventional devices and extension of predictive capability to advanced devices. The approach is to adapt the “Convected Scheme”, or CS method, to the cesium-vapor plasma diode. This method, developed at the University of Wisconsin-Madison, is an extremely efficient algorithm for the solution of charged-particle kinetic equations and has been used successfully to simulate helium RF glow discharges.
Technical Paper

Optimization of an Asynchronous Fuel Injection System in Diesel Engines by Means of a Micro-Genetic Algorithm and an Adaptive Gradient Method

Optimal fuel injection strategies are obtained with a micro-genetic algorithm and an adaptive gradient method for a nonroad, medium-speed DI diesel engine equipped with a multi-orifice, asynchronous fuel injection system. The gradient optimization utilizes a fast-converging backtracking algorithm and an adaptive cost function based on the penalty method, where the penalty coefficient is increased after every line search. The micro-genetic algorithm uses parameter combinations of the best two individuals in each generation until local convergence is achieved, and then generates a random population to continue the global search. The optimizations have been performed for a two-pulse fuel injection strategy in which the optimization parameters are the injection timings and the nozzle orifice diameters.
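The micro-genetic strategy described above can be illustrated with a minimal, self-contained sketch (a toy version, not the authors' code): a tiny population is evolved by crossing the two best individuals, and once the population converges locally it is re-seeded at random around the best member to continue the global search. The test function, population size, and tolerance below are all hypothetical choices.

```python
import random

def micro_ga(cost, n_params, pop_size=5, generations=200, tol=1e-3, seed=0):
    """Toy micro-genetic algorithm: crossover of the two best individuals,
    with a random restart whenever the population converges locally."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.random() for _ in range(n_params)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        ranked = sorted(pop, key=cost)
        if cost(ranked[0]) < cost(best):
            best = ranked[0]
        p1, p2 = ranked[0], ranked[1]
        # Offspring are parameter combinations (uniform crossover) of the
        # two best individuals; the best one is carried over unchanged.
        pop = [p1] + [[rng.choice(pair) for pair in zip(p1, p2)]
                      for _ in range(pop_size - 1)]
        # Local convergence: population diversity below tol -> random restart.
        spread = max(abs(a - b) for ind in pop for a, b in zip(ind, p1))
        if spread < tol:
            pop = [best] + [rand_ind() for _ in range(pop_size - 1)]
    return best

# Toy usage: minimize a simple quadratic over the unit square.
sol = micro_ga(lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2, n_params=2)
```

In the paper the cost function would be an engine simulation evaluating emissions and fuel consumption, not an analytic quadratic.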
Technical Paper

Global Optimization of a Two-Pulse Fuel Injection Strategy for a Diesel Engine Using Interpolation and a Gradient-Based Method

A global optimization method has been developed for an engine simulation code and utilized in the search of optimal fuel injection strategies. This method uses a Lagrange interpolation function which interpolates engine output data generated at the vertices and the intermediate points of the input parameters. This interpolation function is then used to find a global minimum over the entire parameter set, which in turn becomes the starting point of a CFD-based optimization. The CFD optimization is based on a steepest descent method with an adaptive cost function, where the line searches are performed with a fast-converging backtracking algorithm. The adaptive cost function is based on the penalty method, where the penalty coefficient is increased after every line search. The parameter space is normalized and, thus, the optimization occurs over the unit cube in higher-dimensional space.
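The gradient stage described above, steepest descent with a fast-converging backtracking line search over the normalized unit cube, can be sketched generically as follows. This is a toy implementation with a finite-difference gradient and an analytic test function standing in for the CFD code; the Armijo constants and step sizes are assumptions.

```python
def backtracking_descent(f, x0, iters=50, alpha0=1.0, beta=0.5, c=1e-4, h=1e-6):
    """Steepest descent with a backtracking (Armijo) line search,
    with iterates clamped to the unit cube as in a normalized parameter space."""
    def grad(x):
        # Forward-difference gradient; in the paper each evaluation of f
        # would be a full CFD run.
        fx = f(x)
        return [(f([xj + (h if j == i else 0.0) for j, xj in enumerate(x)]) - fx) / h
                for i in range(len(x))]
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        fx = f(x)
        alpha = alpha0
        # Backtracking: shrink the step until sufficient decrease is achieved.
        while alpha > 1e-12:
            trial = [min(1.0, max(0.0, xi - alpha * gi)) for xi, gi in zip(x, g)]
            if f(trial) <= fx - c * alpha * sum(gi * gi for gi in g):
                x = trial
                break
            alpha *= beta
    return x

# Toy usage: a quadratic bowl inside the unit square.
xmin = backtracking_descent(lambda x: (x[0] - 0.25) ** 2 + (x[1] - 0.6) ** 2,
                            [0.9, 0.1])
```

The abstract's adaptive penalty would enter by folding constraint violations into `f` with a coefficient that grows after each line search.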
Technical Paper

Optimization of Diesel Engine Operating Parameters Using Neural Networks

Neural networks are useful tools for optimization studies because they are very fast: while capturing the accuracy of multi-dimensional CFD calculations or experimental data, they can be run the many times required by optimization techniques. This paper describes how a set of neural networks, trained on a multi-dimensional CFD code to predict pressure, temperature, heat flux, torque and emissions, has been used by a genetic algorithm in combination with a hill-climbing type algorithm to optimize operating parameters of a diesel engine over the entire speed-torque map of the engine. The optimized parameters are the mass of fuel injected per cycle, the shape of the injection profile for dual split injection, the start of injection, the EGR level and the boost pressure. These have been optimized for minimum emissions. Another set of neural networks has been trained to predict the optimized parameters based on the speed-torque point of the engine.
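The hill-climbing component mentioned above can be illustrated with a small coordinate-wise sketch. Here a cheap analytic function stands in for the fast neural-network surrogate, and the step-shrinking schedule is a hypothetical choice, not the paper's.

```python
def hill_climb(f, x, step=0.1, shrink=0.5, min_step=1e-4):
    """Coordinate-wise hill climbing on a cheap surrogate `f`: probe each
    parameter in +/- step, keep improvements, and shrink the step when no
    move improves the cost. Parameters are clamped to [0, 1]."""
    fx = f(x)
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x[:i] + [min(1.0, max(0.0, x[i] + d))] + x[i + 1:]
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x

# Toy usage: polish a candidate (e.g. the best GA individual) on the surrogate.
best = hill_climb(lambda p: (p[0] - 0.4) ** 2 + (p[1] - 0.8) ** 2, [0.0, 0.0])
```

Because the surrogate is orders of magnitude cheaper than the CFD code it was trained on, the many evaluations this loop makes are affordable, which is exactly the motivation the abstract gives for using neural networks.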
Technical Paper

Improvement of Neural Network Accuracy for Engine Simulations

Neural networks have been used for engine computations in the recent past. One reason for using neural networks is to capture the accuracy of multi-dimensional CFD calculations or experimental data while saving computational time, so that system simulations can be performed within a reasonable time frame. This paper describes three methods to improve upon neural network predictions. Improvement is demonstrated for in-cylinder pressure predictions in particular. The first method incorporates a physical combustion model within the transfer function of the neural network, so that the network predictions incorporate physical relationships as well as mathematical models to fit the data. The second method shows how partitioning the data into different regimes based on different physical processes, and training different networks for different regimes, improves the accuracy of predictions.
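The second method, partitioning the training data by physical regime and fitting a separate model per regime, can be illustrated with a minimal sketch. A one-dimensional least-squares line stands in for each neural network, and the split point and data are hypothetical.

```python
def fit_partitioned(xs, ys, split):
    """Fit one simple model per regime (a least-squares line stands in for a
    neural network here) and route each prediction to its regime's model.
    `split` is a hypothetical physical threshold separating two regimes."""
    def fit_line(pts):
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return m, (sy - m * sx) / n
    lo = [(x, y) for x, y in zip(xs, ys) if x < split]
    hi = [(x, y) for x, y in zip(xs, ys) if x >= split]
    models = {'lo': fit_line(lo), 'hi': fit_line(hi)}
    def predict(x):
        m, b = models['lo'] if x < split else models['hi']
        return m * x + b
    return predict

# Toy data with a regime change at x = 5 (e.g. two combustion phases).
xs = list(range(10))
ys = [2 * x for x in xs[:5]] + [10 + 7 * (x - 5) for x in xs[5:]]
pred = fit_partitioned(xs, ys, split=5)
```

A single line fitted to all ten points would miss the slope change at the split; one model per regime captures each local behavior, which is the intuition behind the paper's per-regime networks.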
Technical Paper

Determination of Flame-Front Equivalence Ratio During Stratified Combustion

Combustion under stratified operating conditions in a direct-injection spark-ignition engine was investigated using simultaneous planar laser-induced fluorescence imaging of the fuel distribution (via 3-pentanone doped into the fuel) and the combustion products (via OH, which occurs naturally). The simultaneous images allow direct determination of the flame front location under highly stratified conditions where the flame, or product, location is not uniquely identified by the absence of fuel. The 3-pentanone images were quantified, and an edge detection algorithm was developed and applied to the OH data to identify the flame front position. The result was the compilation of local flame-front equivalence ratio probability density functions (PDFs) for engine operating conditions at 600 and 1200 rpm and engine loads varying from equivalence ratios of 0.89 to 0.32 with an unthrottled intake. Homogeneous conditions were used to verify the integrity of the method.
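The edge-detection step described above can be illustrated with a toy sketch: mark pixels where the finite-difference gradient magnitude of the (OH-like) signal exceeds a threshold. The actual algorithm in the paper is more involved; the synthetic field and threshold below are hypothetical.

```python
def edge_pixels(img, thresh):
    """Return interior pixels where the central-difference gradient magnitude
    of a 2-D scalar field exceeds `thresh`, i.e. candidate front locations."""
    rows, cols = len(img), len(img[0])
    edges = set()
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
            gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges.add((r, c))
    return edges

# Synthetic "product" field: a sharp front at column 3 of a 5x6 image.
field = [[1.0 if col >= 3 else 0.0 for col in range(6)] for _ in range(5)]
front = edge_pixels(field, thresh=0.4)
```

Intersecting the detected front with the quantified 3-pentanone (fuel) image is what lets the local flame-front equivalence ratio be read off pixel by pixel.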
Technical Paper

Autonomous Vehicles in the Cyberspace: Accelerating Testing via Computer Simulation

We present an approach in which an open-source software infrastructure is used to test the behavior of autonomous vehicles through computer simulation. This software infrastructure is called CAVE, short for Connected Autonomous Vehicle Emulator. As a software platform that allows rapid, low-cost and risk-free testing of novel designs, methods and software components, CAVE accelerates and democratizes research and development activities in the field of autonomous navigation.