Search Results

Technical Paper

Event-Triggered Model Predictive Control for Autonomous Vehicle with Rear Steering

2022-03-29
2022-01-0877
This paper proposes a new nonlinear model predictive control (NMPC) scheme for the autonomous vehicle path-tracking problem. The vehicle is equipped with active rear steering, allowing independent control of the front and rear steering angles. Traditional NMPC, which runs at a fixed sampling rate, has been shown to provide satisfactory control performance on this problem. However, the high computational throughput of NMPC limits its implementation in production vehicles. To address this issue, we propose a novel event-triggered NMPC formulation, in which the NMPC is run only when the actual states deviate from the prediction beyond a certain threshold. In other words, the event-triggered NMPC formulates and solves a constrained optimal control problem only when a trigger event enables it. When the NMPC is not triggered, the optimal control sequence computed by the last NMPC instance is shifted to determine the control action.
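The trigger-and-shift logic described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the interface (`solve_nmpc`, the norm-based deviation test, and repeating the final input when shifting) is assumed for the example.

```python
import numpy as np

def event_triggered_step(x_actual, x_predicted, last_u_seq, threshold, solve_nmpc):
    """One step of event-triggered NMPC (hypothetical interface).

    Re-solve the NMPC only when the actual state deviates from the
    prediction beyond `threshold`; otherwise shift the previously
    computed control sequence, as the abstract describes.
    """
    deviation = np.linalg.norm(x_actual - x_predicted)
    if deviation > threshold:
        # Trigger event: formulate and solve the constrained OCP anew.
        u_seq = solve_nmpc(x_actual)
    else:
        # No trigger: shift the last sequence forward one step,
        # repeating the final input to keep the horizon length.
        u_seq = np.concatenate([last_u_seq[1:], last_u_seq[-1:]])
    return u_seq[0], u_seq
```

The threshold choice trades control accuracy against solver invocations: a tighter threshold approaches fixed-rate NMPC, while a looser one saves computation at the cost of tracking error between re-solves.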
Technical Paper

RL-MPC: Reinforcement Learning Aided Model Predictive Controller for Autonomous Vehicle Lateral Control

2024-04-09
2024-01-2565
This paper presents a nonlinear model predictive controller (NMPC) coupled with a pre-trained reinforcement learning (RL) model that can be applied to lateral control tasks for autonomous vehicles. The past few years have seen remarkable breakthroughs in applying reinforcement learning to quadruped, biped, and robot-arm motion control. While this research extends the frontiers of artificial intelligence and robotics, a control policy governed by reinforcement learning alone can hardly guarantee the safety and robustness imperative to technologies in our daily life: the amount of experience needed to train an RL model often makes training in simulation the only viable option, which leads to the long-standing sim-to-real gap problem. This prevents autonomous vehicles from harnessing RL's ability to optimize a driving policy by searching a high-dimensional state space.
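One common way such a coupling can work, sketched below purely for illustration, is to let the pre-trained RL policy propose an action that the NMPC then refines under hard constraints. The abstract excerpt does not specify the paper's actual architecture, so the interface (`rl_policy`, `nmpc_solve`, and the saturation-style constraint) is entirely assumed.

```python
def rl_mpc_lateral_step(state, rl_policy, nmpc_solve):
    """Hypothetical RL-aided NMPC step for lateral control.

    The RL policy supplies a candidate steering command; the NMPC
    refines it subject to vehicle constraints, so the RL suggestion
    never reaches the actuators unchecked. This pattern is an
    assumption for illustration, not the paper's confirmed design.
    """
    u_rl = rl_policy(state)           # candidate steering from pre-trained RL
    u_safe = nmpc_solve(state, u_rl)  # constraint-respecting refinement
    return u_safe
```

The appeal of this family of designs is that the optimization layer provides the safety and robustness guarantees that, as the abstract notes, RL alone cannot.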