
Search Results

Technical Paper

Machine Learning Based Optimal Energy Storage Devices Selection Assistance for Vehicle Propulsion Systems

2020-04-14
2020-01-0748
This study investigates the use of machine learning methods for the selection of energy storage devices in military electrified vehicles. Powertrain electrification relies on proper selection of energy storage devices in terms of chemistry, size, energy density, power density, and other attributes. Military vehicles vary widely in weight, acceleration requirements, operating road environment, and mission. This study aims to assist energy storage device selection for military vehicles using a data-driven approach. We use machine learning models to extract relationships between vehicle characteristics and requirements and the corresponding energy storage devices. After training, the machine learning models can predict the ideal energy storage devices given the target vehicle's design parameters as inputs. The predicted ideal energy storage devices can be treated as an initial design, with modifications made based on the validation results.
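
A minimal, hypothetical sketch of the data-driven selection idea (not the authors' implementation): a classifier maps assumed vehicle characteristics (weight, acceleration requirement, grade, mission range) to an energy storage device class. All feature names, device labels, and data below are illustrative placeholders.

```python
# Sketch only: synthetic data and a toy labeling rule stand in for the paper's
# vehicle/energy-storage dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical vehicle characteristics: weight [kg], 0-60 time [s],
# max grade [%], mission range [km].
X = np.column_stack([
    rng.uniform(5_000, 40_000, n),   # vehicle weight
    rng.uniform(8, 30, n),           # acceleration requirement
    rng.uniform(0, 30, n),           # max grade
    rng.uniform(50, 600, n),         # mission range
])

# Hypothetical labels: 0 = high-power chemistry, 1 = high-energy chemistry,
# 2 = hybrid pack with ultracapacitors (toy rule for demonstration only).
y = np.where(X[:, 3] > 400, 1, np.where(X[:, 1] < 12, 2, 0))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The predicted device class serves as an initial design suggestion that an
# engineer would refine through validation, as the abstract describes.
print("held-out accuracy:", model.score(X_test, y_test))
print("suggestion for a 20 t, 10 s, 15 %, 300 km vehicle:",
      model.predict([[20_000, 10, 15, 300]]))
```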
Technical Paper

Cooperative Mandatory Lane Change for Connected Vehicles on Signalized Intersection Roads

2020-04-14
2020-01-0889
This paper presents a hierarchical control architecture to coordinate a group of connected vehicles on signalized intersection roads, where vehicles are allowed to change lanes to follow a prescribed path. The proposed hierarchical control strategy consists of two levels: a high-level controller at the intersection and a decentralized low-level controller in each car. In this architecture, the centralized intersection controller estimates a target velocity for each approaching connected vehicle, based on signal phase and timing (SPaT) information, so that the vehicle can avoid stopping at a red light. Each connected vehicle, acting as a decentralized controller, uses model predictive control (MPC) to track the target velocity in a fuel-efficient manner. The main objective of this paper is to consider mandatory lane changes: in realistic scenarios, vehicles are not required to drive in a single lane and are likely to change lanes before reaching a signal.
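
As an illustration of the red-light-avoidance logic, the following sketch computes a constant target velocity that places the vehicle's arrival inside a green window obtained from SPaT data. The speed limits and timings are assumed values, not the paper's controller.

```python
# Rough sketch of picking a target velocity from SPaT information so that the
# vehicle reaches the stop bar during a green window. Limits are assumptions.
def target_velocity(dist_to_signal, t_green_start, t_green_end,
                    v_min=5.0, v_max=20.0):
    """Return a cruise velocity [m/s] that reaches the signal while it is
    green, or None if no constant speed within limits works."""
    # Constant-speed arrival time must fall inside [t_green_start, t_green_end].
    v_hi = dist_to_signal / max(t_green_start, 1e-6)  # arrive at green start
    v_lo = dist_to_signal / t_green_end               # arrive at green end
    lo, hi = max(v_lo, v_min), min(v_hi, v_max)
    if lo > hi:
        return None  # no feasible constant speed: a stop cannot be avoided
    return 0.5 * (lo + hi)  # midpoint leaves margin on both sides


# Example: 300 m from the intersection, green from t = 20 s to t = 45 s.
v_ref = target_velocity(300.0, 20.0, 45.0)
print(f"target velocity: {v_ref:.1f} m/s")
# The low-level MPC in each vehicle would then track v_ref fuel-efficiently.
```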
Technical Paper

A Heuristic Supervisory Controller for a 48V Hybrid Electric Vehicle Considering Fuel Economy and Battery Aging

2019-01-15
2019-01-0079
Most studies on supervisory controllers of hybrid electric vehicles consider only fuel economy in the objective function. Given the importance of energy storage system health and its impact on the vehicle's functionality, cost, and warranty, recent studies have included battery degradation as a second objective, proposing different energy management strategies and battery life estimation methods. In this paper, a rule-based supervisory controller is proposed that splits the torque demand based not only on fuel consumption but also on battery capacity fade, using the concept of a severity factor. To this end, the severity factor is calculated at each time step of a driving cycle using a look-up table with three inputs: C-rate, operating temperature, and battery state of charge. The capacity loss of the battery is then calculated using a semi-empirical capacity fade model.
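
The severity-factor bookkeeping can be sketched as follows: a look-up table returns an aging severity as a function of C-rate, temperature, and SOC, which weights the charge throughput accumulated over a driving cycle before a semi-empirical power law converts it to capacity loss. The table values, cell capacity, and fade coefficients below are placeholders, not calibrated data.

```python
# Illustrative severity-factor sketch; not the paper's calibrated model.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Severity factor sigma(c_rate, temperature, soc): how much faster the battery
# ages than under nominal conditions (sigma = 1).
c_rates = np.array([0.5, 1.0, 2.0, 4.0])
temps_c = np.array([10.0, 25.0, 40.0])
socs    = np.array([0.2, 0.5, 0.8])
sigma_table = (1.0
               + 0.5 * (c_rates[:, None, None] / 4.0)
               + 0.02 * np.abs(temps_c[None, :, None] - 25.0)
               + 0.3 * socs[None, None, :])          # toy severity surface
sigma_lut = RegularGridInterpolator((c_rates, temps_c, socs), sigma_table,
                                    bounds_error=False, fill_value=None)

def effective_ah_step(current_a, temp_c, soc, dt_s):
    """Severity-weighted charge throughput for one time step of a cycle."""
    c_rate = abs(current_a) / 10.0          # assumed 10 Ah cell
    sigma = float(sigma_lut([c_rate, temp_c, soc]))
    return sigma * abs(current_a) * dt_s / 3600.0

# Accumulate over a synthetic 30-minute cycle and convert to capacity fade
# with a simple semi-empirical power law Q_loss = k * Ah_eff ** z.
rng = np.random.default_rng(1)
ah_eff = sum(effective_ah_step(rng.uniform(-40, 40), 30.0, 0.6, 1.0)
             for _ in range(1800))
q_loss_pct = 0.02 * ah_eff ** 0.55
print(f"effective Ah throughput: {ah_eff:.1f}, capacity loss: {q_loss_pct:.3f} %")
```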
Technical Paper

Real-Time Reinforcement Learning Optimized Energy Management for a 48V Mild Hybrid Electric Vehicle

2019-04-02
2019-01-1208
Energy management of hybrid vehicles has been a widely researched area. Strategies such as dynamic programming (DP), the equivalent consumption minimization strategy (ECMS), and Pontryagin's minimum principle (PMP) are well analyzed in the literature. However, adaptive optimization work is still lacking, especially for reinforcement learning (RL). In this paper, Q-learning, a model-free reinforcement learning method, is implemented in a mid-size 48V mild parallel hybrid electric vehicle (HEV) framework to optimize fuel economy. Unlike other RL work on HEVs, this paper considers only vehicle speed and vehicle torque demand as the Q-learning states; SOC is excluded to reduce the state dimension. The paper focuses on showing that an EMS with non-SOC state vectors is capable of controlling the vehicle and producing satisfactory results. Electric motor torque demand is chosen as the action.
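
A tabular Q-learning sketch of this kind of EMS is shown below, with discretized (vehicle speed, torque demand) states and motor torque as the action; the plant and reward are toy stand-ins, not the paper's 48V vehicle model.

```python
# Tabular Q-learning sketch: non-SOC state (speed bin, torque-demand bin),
# action = motor torque level. Reward is a crude negative fuel-use proxy.
import numpy as np

rng = np.random.default_rng(0)
n_speed, n_torque, n_action = 10, 10, 5
actions_nm = np.linspace(0.0, 40.0, n_action)      # assumed motor torque levels
Q = np.zeros((n_speed, n_torque, n_action))
alpha, gamma, eps = 0.1, 0.95, 0.1

def fuel_cost(torque_demand, motor_torque):
    """Toy stand-in for instantaneous fuel use: engine covers the remainder."""
    engine_torque = max(torque_demand - motor_torque, 0.0)
    return 0.01 * engine_torque ** 1.2

for episode in range(200):
    s = (rng.integers(n_speed), rng.integers(n_torque))
    for step in range(500):
        # Epsilon-greedy action selection.
        a = (rng.integers(n_action) if rng.random() < eps
             else int(np.argmax(Q[s])))
        torque_demand = s[1] * 15.0                 # bin index -> Nm (assumed)
        r = -fuel_cost(torque_demand, actions_nm[a])
        s_next = (rng.integers(n_speed), rng.integers(n_torque))  # random cycle
        Q[s + (a,)] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s + (a,)])
        s = s_next

# Greedy policy: preferred motor torque for each (speed, torque-demand) bin.
policy_nm = actions_nm[np.argmax(Q, axis=2)]
print(policy_nm.round(1))
```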
Technical Paper

A Look-Ahead Model Predictive Optimal Control Strategy of a Waste Heat Recovery-Organic Rankine Cycle for Automotive Application

2019-04-02
2019-01-1130
The Organic Rankine Cycle (ORC) has proven to be a promising technology for Waste Heat Recovery (WHR) systems in heavy-duty diesel engine applications. However, due to the highly transient heat source, controlling the working fluid flow through the ORC system is a challenge for real-time application. With advance knowledge of the heat source dynamics, there is potential to enhance power optimization from the WHR system through predictive optimal control. This paper proposes a look-ahead control strategy to explore the potential for increased power recovery from a simulated WHR system. In the look-ahead control, the future vehicle speed is predicted using road topography and V2V connectivity. The forecasted vehicle speed is used to predict the engine speed and torque, which in turn allows estimation of the engine exhaust conditions used in the ORC control model.
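
A rough illustration of the feedforward part of such a look-ahead strategy: a forecast of vehicle speed is mapped to recoverable exhaust heat and then to a working-fluid pump-flow schedule that would hold a target superheat. All constants and the speed-to-heat map are assumed for illustration, not taken from the paper.

```python
# Simplified look-ahead sketch: pre-position the working-fluid pump flow so
# the evaporator outlet stays superheated over the forecast horizon.
import numpy as np

CP_FLUID = 1.4e3         # J/(kg K), assumed working-fluid heat capacity
T_EVAP = 120.0           # degC, assumed evaporation temperature
SUPERHEAT_TARGET = 15.0  # degC of superheat to maintain

def exhaust_heat_forecast(speed_forecast_kph):
    """Crude map from predicted vehicle speed to recoverable exhaust heat [W]."""
    return 400.0 * np.asarray(speed_forecast_kph)

def pump_flow_schedule(speed_forecast_kph, t_fluid_in=60.0):
    """Feedforward pump mass flow [kg/s] over the horizon that would yield the
    target superheat if the forecast heat were fully absorbed."""
    q_dot = exhaust_heat_forecast(speed_forecast_kph)
    dT = (T_EVAP + SUPERHEAT_TARGET) - t_fluid_in
    return q_dot / (CP_FLUID * dT)

# Example 10 s horizon of predicted speed (e.g., from road topography / V2V).
speed_forecast = np.linspace(70, 40, 10)       # vehicle decelerating
print(pump_flow_schedule(speed_forecast).round(3))
# An MPC layer would track this schedule while respecting actuator limits.
```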
Technical Paper

Reinforcement Learning Based Fast Charging of Electric Vehicle Battery Packs

2023-10-31
2023-01-1681
Range anxiety and lack of adequate access to fast charging are proving to be important impediments to electric vehicle (EV) adoption. While many techniques for fast charging EV batteries (model-based and model-free) have been developed, they have focused on a single lithium-ion cell. Extensions to battery packs are scarce and often consider simplified architectures (e.g., series-connected) for ease of modeling. Computational considerations have also restricted fast-charging simulations to small battery packs, e.g., four cells (for both series- and parallel-connected cells). Hence, in this paper, we pursue a model-free approach based on reinforcement learning (RL) to fast charge a large battery pack comprising 444 cells. Each cell is characterized by an equivalent circuit model coupled with a second-order lumped thermal model to simulate the battery behavior. After training the underlying RL agent, the developed model is straightforward to implement with low computational complexity.
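
A per-cell plant of the type described can be sketched as a single-RC equivalent circuit coupled with a two-state (core/surface) lumped thermal model; the parameter values and the constant-current charging loop below are illustrative assumptions, not the paper's pack model or RL agent.

```python
# Sketch of one cell: single-RC equivalent circuit + second-order (core/surface)
# lumped thermal model, stepped with forward Euler. Parameters are placeholders.
class CellModel:
    def __init__(self, q_ah=5.0):
        self.q = q_ah * 3600.0                  # capacity [C]
        self.soc, self.v_rc = 0.2, 0.0          # state of charge, RC voltage
        self.t_core, self.t_surf = 25.0, 25.0   # temperatures [degC]
        self.r0, self.r1, self.c1 = 0.01, 0.015, 2000.0          # ECM params
        self.rc_core, self.rc_surf, self.c_th = 1.5, 3.0, 60.0   # thermal params

    def ocv(self):
        return 3.0 + 1.2 * self.soc             # toy open-circuit-voltage curve

    def step(self, i_charge, dt=1.0, t_amb=25.0):
        # Electrical update (charging current positive).
        self.soc = min(self.soc + i_charge * dt / self.q, 1.0)
        self.v_rc += dt * (-self.v_rc / (self.r1 * self.c1) + i_charge / self.c1)
        v_term = self.ocv() + self.v_rc + self.r0 * i_charge
        # Thermal update: core heated by losses, surface couples core to ambient.
        q_gen = i_charge ** 2 * self.r0 + self.v_rc ** 2 / self.r1
        self.t_core += dt / self.c_th * (q_gen - (self.t_core - self.t_surf) / self.rc_core)
        self.t_surf += dt / self.c_th * ((self.t_core - self.t_surf) / self.rc_core
                                         - (self.t_surf - t_amb) / self.rc_surf)
        return v_term

# A (hypothetical) RL agent would pick the charge current each step, trading
# charge time against voltage and core-temperature limits. Here: constant 2C.
cell = CellModel()
for _ in range(900):
    v = cell.step(i_charge=10.0)
print(f"SOC {cell.soc:.2f}, terminal V {v:.2f}, core T {cell.t_core:.1f} degC")
```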