
Search Results

Technical Paper

Cooperative Mandatory Lane Change for Connected Vehicles on Signalized Intersection Roads

2020-04-14
2020-01-0889
This paper presents a hierarchical control architecture to coordinate a group of connected vehicles on signalized intersection roads, where vehicles are allowed to change lanes to follow a prescribed path. The proposed strategy consists of two control levels: a high-level controller at the intersection and a decentralized low-level controller in each car. The centralized intersection controller estimates a target velocity for each approaching connected vehicle, based on signal phase and timing (SPAT) information, so that the vehicle avoids stopping at a red light. Each connected vehicle, acting as a decentralized controller, uses model predictive control (MPC) to track the target velocity in a fuel-efficient manner. The main objective of this paper is to account for mandatory lane changes: in realistic scenarios, vehicles are not confined to a single lane and are likely to change lanes before reaching a signal.
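The SPAT-based advisory-speed idea described above can be sketched as follows. This is a minimal illustration, not the paper's controller: the function name, the single green window, and the constant-speed assumption are all simplifications introduced here.

```python
def target_velocity(distance_m, now_s, green_start_s, green_end_s, v_max_mps):
    """Hypothetical SPaT-based advisory speed: pick the fastest speed
    (capped at v_max) that puts the vehicle at the stop bar during the
    green window, so it never has to stop at a red light."""
    # Earliest possible arrival time when cruising at the speed limit.
    t_arrive = now_s + distance_m / v_max_mps
    if green_start_s <= t_arrive <= green_end_s:
        return v_max_mps  # cruising at the limit already hits the green window
    if t_arrive < green_start_s:
        # Arriving too early (light still red): slow down just enough
        # to reach the stop bar exactly when the green phase begins.
        return distance_m / (green_start_s - now_s)
    # Too late for this green window; a real controller would consider
    # the next signal cycle here.
    return None
```

In the paper's architecture, a velocity like this would become the reference that each vehicle's low-level MPC tracks fuel-efficiently.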
Technical Paper

A Heuristic Supervisory Controller for a 48V Hybrid Electric Vehicle Considering Fuel Economy and Battery Aging

2019-01-15
2019-01-0079
Most studies on supervisory controllers for hybrid electric vehicles consider only fuel economy in the objective function. Given the importance of energy storage system health and its impact on the vehicle's functionality, cost, and warranty, recent studies have included battery degradation as a second objective, proposing various energy management strategies and battery life estimation methods. In this paper, a rule-based supervisory controller is proposed that splits the torque demand based not only on fuel consumption but also on battery capacity fade, using the concept of a severity factor. To this end, the severity factor is computed at each time step of a driving cycle from a look-up table with three inputs: C-rate, operating temperature, and battery state of charge. The capacity loss of the battery is then calculated using a semi-empirical capacity fade model.
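The severity-factor-plus-fade-model pipeline can be sketched as below. All coefficients and functional forms here are illustrative assumptions; the paper uses a calibrated 3-D look-up table and its own semi-empirical model, neither of which is reproduced here.

```python
import math

def severity_factor(c_rate, temp_K, soc):
    """Hypothetical severity factor: the ratio of the actual aging rate to
    the aging rate under nominal conditions (assumed here: 1C, 25 degC,
    50% SOC). A real controller would interpolate a calibrated 3-D
    look-up table instead of these hand-picked stress terms."""
    sf = 1.0
    sf *= 1.0 + 0.5 * max(0.0, c_rate - 1.0)   # penalty for high C-rate
    sf *= math.exp((temp_K - 298.15) / 15.0)   # Arrhenius-like temperature effect
    sf *= 1.0 + 0.4 * abs(soc - 0.5)           # added stress away from mid-SOC
    return sf

def capacity_loss_pct(severity, ah_throughput, B=3e-2, z=0.55):
    """Semi-empirical fade model of the common power-law form
    Q_loss = B * (severity * Ah)^z. B and z are illustrative values."""
    return B * (severity * ah_throughput) ** z
```

A rule-based torque split would then weigh each candidate split's fuel cost against the incremental capacity loss predicted this way.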
Technical Paper

Real-Time Reinforcement Learning Optimized Energy Management for a 48V Mild Hybrid Electric Vehicle

2019-04-02
2019-01-1208
Energy management of hybrid vehicles has been a widely researched area. Strategies such as dynamic programming (DP), the equivalent consumption minimization strategy (ECMS), and Pontryagin's minimum principle (PMP) are well analyzed in the literature. However, work on adaptive optimization is still lacking, especially for reinforcement learning (RL). In this paper, Q-learning, a model-free reinforcement learning method, is implemented in a mid-size 48V mild parallel hybrid electric vehicle (HEV) framework to optimize fuel economy. Unlike other RL work on HEVs, this paper considers only vehicle speed and vehicle torque demand as the Q-learning states; SOC is excluded to reduce the state dimension. The paper focuses on showing that an EMS with a non-SOC state vector can control the vehicle and produce satisfactory results. Electric motor torque demand is chosen as the action.
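The state/action formulation described above can be sketched as a tabular Q-learning skeleton. The discretization, action set, and hyperparameters below are assumptions for illustration; only the structural choice (speed and torque-demand states, no SOC state, motor torque as the action) follows the abstract.

```python
import random

# Hypothetical tabular Q-learning EMS sketch. States are discretized
# (vehicle speed, torque demand) bins; SOC is deliberately NOT a state,
# mirroring the paper's reduced state dimension. The action is the
# electric motor's share of the demanded torque.
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # motor fraction of torque demand
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

Q = {}  # (speed_bin, torque_bin) -> list of action values

def q_values(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

def choose_action(state):
    """Epsilon-greedy action selection over the motor-torque fractions."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    vals = q_values(state)
    return vals.index(max(vals))

def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward would penalize fuel use."""
    best_next = max(q_values(next_state))
    q_values(state)[action] += ALPHA * (
        reward + GAMMA * best_next - q_values(state)[action]
    )
```

In a driving-cycle simulation, each time step would bin the current speed and torque demand, pick a motor fraction with `choose_action`, and call `update` with a fuel-based reward.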
Technical Paper

Reinforcement Learning Based Fast Charging of Electric Vehicle Battery Packs

2023-10-31
2023-01-1681
Range anxiety and lack of adequate access to fast charging are proving to be important impediments to electric vehicle (EV) adoption. While many techniques for fast charging EV batteries (model-based and model-free) have been developed, they have focused on a single lithium-ion cell. Extensions to battery packs are scarce and often consider simplified architectures (e.g., series-connected) for ease of modeling. Computational considerations have also restricted fast-charging simulations to small battery packs, e.g., four cells (whether series- or parallel-connected). Hence, in this paper, we pursue a model-free approach based on reinforcement learning (RL) to fast charge a large battery pack comprising 444 cells. Each cell is characterized by an equivalent circuit model coupled with a second-order lumped thermal model to simulate the battery behavior. Once trained, the resulting RL policy is straightforward to implement with low computational complexity.
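The per-cell simulator described above (equivalent circuit plus second-order lumped thermal model) can be sketched as a single discrete-time step. Every parameter value, the linear OCV curve, and the first-order (single-RC) circuit are assumptions for illustration; the paper's actual model and parameters are not reproduced here.

```python
def cell_step(soc, v_rc, t_core, t_surf, i_app, dt=1.0,
              q_ah=5.0, r0=0.01, r1=0.015, c1=2000.0,
              c_core=60.0, c_surf=40.0, r_cc=1.0, r_cs=3.0, t_amb=298.15):
    """Hypothetical single-cell simulator, applied per cell of the pack:
    a Thevenin equivalent circuit (series resistance r0 plus one RC
    polarization branch) coupled to a second-order lumped thermal model
    with core and surface temperature states. Parameters are illustrative.
    Positive i_app is charging current in amps; temperatures in kelvin."""
    # Equivalent circuit: coulomb counting + RC branch dynamics (Euler step)
    soc_n = soc + i_app * dt / (q_ah * 3600.0)
    v_rc_n = v_rc + dt * (i_app / c1 - v_rc / (r1 * c1))
    ocv = 3.0 + 1.2 * soc_n                    # crude linear OCV stand-in
    v_term = ocv + v_rc_n + i_app * r0         # terminal voltage
    # Thermal model: ohmic heat in the core, conduction core->surface->ambient
    q_gen = i_app ** 2 * r0 + v_rc_n ** 2 / r1
    t_core_n = t_core + dt * (q_gen - (t_core - t_surf) / r_cc) / c_core
    t_surf_n = t_surf + dt * ((t_core - t_surf) / r_cc
                              - (t_surf - t_amb) / r_cs) / c_surf
    return soc_n, v_rc_n, t_core_n, t_surf_n, v_term
```

An RL fast-charging agent would step many such cells in parallel, choosing charging currents while observing SOC, voltage, and temperature limits.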