
Search Results

Viewing 1 to 7 of 7
Technical Paper

A Sparse Spatiotemporal Transformer for Detecting Driver Distracted Behaviors

2023-04-11
2023-01-0835
At present, autonomous driving technology is still immature, and fully driverless vehicles remain a long way off. The driver's state is therefore still an important factor in traffic safety, and detecting the driver's distracted behavior is of great significance. In driver distracted-behavior detection, certain characteristics of driver behavior in the cockpit can be exploited to improve detection performance. Compared with general human behaviors, driving behaviors are confined to an enclosed space and are far less diverse. With this in mind, we propose a sparse spatiotemporal transformer that extracts local spatiotemporal features by segmenting the video at the low levels of the model and then, guided by the attention map in the middle layers, retains only the local key spatiotemporal information associated with larger attention values, so as to enhance the high-level global semantic features.
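The attention-guided sparsification the abstract describes can be sketched as selecting the tokens that receive the most attention and discarding the rest. The column-sum scoring rule, shapes, and function name below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def select_key_tokens(attn_map, tokens, k):
    """Retain the k tokens receiving the largest total attention.

    attn_map : (num_queries, num_tokens) softmax attention weights
    tokens   : (num_tokens, dim) local spatiotemporal token features
    Returns the kept token features and their indices (original order kept).
    """
    scores = attn_map.sum(axis=0)                 # total attention each token receives
    idx = np.sort(np.argsort(scores)[::-1][:k])   # top-k tokens, temporal order preserved
    return tokens[idx], idx
```

Only the selected tokens would then be passed to the higher transformer layers, reducing computation while keeping the most salient spatiotemporal cues.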
Journal Article

A Visible and Infrared Fusion Based Visual Odometry for Autonomous Vehicles

2020-04-14
2020-01-0099
An accurate and timely position estimate of the vehicle is required at all times for autonomous driving. The global navigation satellite system (GNSS), even when integrated with costly inertial measurement units (IMUs), often fails to provide high-accuracy positioning in GNSS-challenged environments such as urban canyons. As a result, visual odometry is proposed as an effective complementary approach. Although it is widely recognized that visual odometry should be developed from both visible and infrared images to handle issues such as frequent changes in ambient lighting conditions, the mechanism of visible-infrared fusion is often poorly designed. This study proposes a Generative Adversarial Network (GAN) based model comprising a generator, which aims to produce a fused image combining infrared intensities and visible gradients, and a discriminator, whose target is to force the fused image to retain as many of the details that exist mostly in visible images as possible.
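The generator's objective of combining infrared intensities with visible gradients can be sketched as a content loss with an intensity term and a gradient term. The L2/L1 split and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def image_gradients(img):
    """Horizontal and vertical finite-difference gradients."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def fusion_content_loss(fused, infrared, visible, lam=5.0):
    """Pull the fused image toward infrared intensities (L2)
    and visible-image gradients (L1)."""
    intensity = np.mean((fused - infrared) ** 2)
    fgx, fgy = image_gradients(fused)
    vgx, vgy = image_gradients(visible)
    gradient = np.mean(np.abs(fgx - vgx)) + np.mean(np.abs(fgy - vgy))
    return intensity + lam * gradient
```

In the full GAN setup this content loss would be combined with the adversarial loss from the discriminator, which pushes the fused image toward the visible-image detail distribution.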
Technical Paper

Decision Making and Trajectory Planning of Intelligent Vehicle's Lane-Changing Behavior on Highways under Multi-Objective Constraints

2020-04-14
2020-01-0124
Discretionary lane changing is commonly seen in highway driving. Intelligent vehicles are expected to change lanes discretionarily for a better driving experience and higher traffic efficiency. This study proposes to optimize the decision-making and trajectory-planning process so that intelligent vehicles make lane changes not only with driving safety taken into account, but also with the goals of improving driving comfort and meeting the driver's expectations. The mechanism by which various factors contribute to the driver's intention to change lanes was studied through a series of driving simulation experiments, and a Lane-Changing Intention Generation (LCIG) model based on Bi-directional Long Short-Term Memory (Bi-LSTM) was proposed.
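A multi-objective lane-change decision of the kind the abstract describes can be sketched as a weighted score over the named objectives (safety, comfort, driver expectation/efficiency). The linear form, weights, and margin below are illustrative assumptions:

```python
def lane_change_utility(safety, comfort, efficiency, weights=(0.5, 0.3, 0.2)):
    """Weighted score over normalized objective values in [0, 1]."""
    ws, wc, we = weights
    return ws * safety + wc * comfort + we * efficiency

def should_change_lane(keep_scores, change_scores, margin=0.1):
    """Change lanes only if the target lane's utility clearly
    exceeds staying in the current lane."""
    return lane_change_utility(*change_scores) - lane_change_utility(*keep_scores) > margin
```

The margin acts as hysteresis, preventing oscillating decisions when the two lanes score almost equally.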
Technical Paper

Driver Distraction Detection with a Two-stream Convolutional Neural Network

2020-04-14
2020-01-1039
Driver distraction detection is crucial to driving safety when autonomous vehicles are co-piloted. Recognizing driver behaviors that are highly related to distraction from a real-time video stream is widely acknowledged as an effective approach, mainly due to its non-intrusiveness. In recent years, deep learning neural networks have been adopted to bypass manual feature design, which used to be the major downside of computer-vision based approaches. However, detection accuracy and generalization ability are still unsatisfactory, since most deep learning models extract only the spatial information contained in images. This research develops a driver distraction model based on a two-stream (spatial and temporal) convolutional neural network (CNN).
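In a two-stream architecture, the spatial stream typically scores single RGB frames and the temporal stream scores stacked optical flow; the per-class scores are then fused. The averaging weights below are an illustrative assumption (other fusion schemes, e.g. an SVM over stacked scores, are also common):

```python
import numpy as np

def fuse_two_streams(spatial_probs, temporal_probs, w_spatial=0.5):
    """Late fusion of per-class scores from the spatial (RGB frame)
    and temporal (stacked optical flow) streams."""
    fused = (w_spatial * np.asarray(spatial_probs)
             + (1.0 - w_spatial) * np.asarray(temporal_probs))
    return int(np.argmax(fused)), fused
```

Late fusion lets each stream be trained independently, which is convenient when flow computation and frame sampling run at different rates.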
Technical Paper

Federated Learning-Enabled Training of Perception Models for Autonomous Driving

2024-04-09
2024-01-2873
For intelligent vehicles, a robust perception system relies on training datasets with a large variety of scenes. The architecture of federated learning allows for efficient collaborative model iteration while ensuring privacy and security by leveraging data from multiple parties. However, the local data from different participants is often not independent and identically distributed, significantly affecting the training effectiveness of autonomous driving perception models in the context of federated learning. Unlike the well-studied issues of label distribution discrepancies in previous work, we focus on the challenges posed by scene heterogeneity in the context of federated learning for intelligent vehicles and the inadequacy of a single scene for training multi-task perception models. In this paper, we propose a federated learning-based perception model training system.
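The server-side aggregation step in federated learning is commonly the FedAvg rule: combine client models weighted by local dataset size. This is the generic rule, a simplification of the paper's system (which additionally addresses scene heterogeneity):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Federated averaging: each client's model is a list of numpy arrays;
    the aggregate weights each client by its local dataset size."""
    total = float(sum(client_sizes))
    agg = [np.zeros_like(p) for p in client_params[0]]
    for params, n in zip(client_params, client_sizes):
        for i, p in enumerate(params):
            agg[i] += (n / total) * p
    return agg
```

With non-IID client data, as the abstract notes, this simple size weighting can be insufficient, which motivates scene-aware extensions.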
Technical Paper

Intention-Aware Dual Attention Based Network for Vehicle Trajectory Prediction

2022-12-22
2022-01-7098
Accurate prediction of surrounding vehicles' motion is critical for safe, high-quality autonomous driving decision-making and motion planning. To address the problem that current deep learning-based trajectory prediction methods do not accurately and effectively extract vehicle-to-vehicle interactions and road environment information, we design a target-vehicle intention-aware dual attention network (IDAN), which establishes a multi-task learning framework that combines an intention network and a trajectory prediction network under dual constraints. The intention network generates an intention encoding representing the driver's intention information and feeds it into the attention module of the trajectory prediction network, helping the trajectory prediction network achieve better prediction accuracy.
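Feeding an intention encoding into an attention module can be sketched as biasing the attention query with the encoding before scoring. The additive injection and matching dimensions are assumptions, a simplified stand-in for IDAN's dual attention:

```python
import numpy as np

def intention_aware_attention(query, keys, values, intention):
    """Scaled dot-product attention whose query is biased by an
    intention encoding of the same dimension."""
    q = np.asarray(query) + np.asarray(intention)     # inject intention into the query
    scores = np.asarray(keys) @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())                 # numerically stable softmax
    w /= w.sum()
    return w @ np.asarray(values)
```

The effect is that the same scene features are attended differently depending on the predicted maneuver intention.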
Technical Paper

LSTM-Based Trajectory Tracking Control for Autonomous Vehicles

2022-12-22
2022-01-7079
With improving sensor accuracy, sensor data plays an increasingly important role in intelligent vehicle motion control, and good use of it can improve vehicle control. However, data-based end-to-end control suffers from poorly interpretable control models and high time costs, while model-based control methods often struggle to deliver high-fidelity vehicle controllers because of model errors and uncertainties in vehicle dynamics modeling. Under high-speed steering conditions, it is difficult for vehicle control to ensure stability and safety. Therefore, this paper proposes a hybrid model- and data-driven control method. Based on the vehicle state data and road information data provided by vehicle sensors, the method constructs a deep neural network based on LSTM and attention, which serves as a compensator for the performance degradation of the LQR controller caused by modeling errors.
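The hybrid structure the abstract describes, a nominal LQR command plus a learned residual correction, can be sketched as follows. The `compensator` callable stands in for the paper's LSTM-plus-attention network, and its signature is an assumption:

```python
import numpy as np

def hybrid_control(state, K, compensator):
    """Model-based LQR feedback plus a learned residual
    that compensates for modeling error."""
    u_lqr = -K @ state          # nominal LQR command, u = -Kx
    u_res = compensator(state)  # data-driven correction term
    return u_lqr + u_res
```

Keeping the LQR term as the backbone preserves interpretability and a stability baseline, while the residual network only has to learn the (smaller) model-error correction.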