Search Results

Viewing 1 to 7 of 7
Technical Paper

4D Radar-Inertial SLAM based on Factor Graph Optimization

2024-04-09
2024-01-2844
SLAM (Simultaneous Localization and Mapping) plays a key role in autonomous driving. Recently, 4D Radar has attracted widespread attention because it breaks through the limitations of 3D millimeter-wave radar and can simultaneously measure the range, velocity, azimuth, and elevation of a target with high resolution. However, there are still few studies on 4D Radar in SLAM. In this paper, RI-FGO, a 4D Radar-Inertial SLAM method based on Factor Graph Optimization, is proposed. The RANSAC (Random Sample Consensus) method is used to eliminate dynamic obstacle points from a single scan, and the ego-motion velocity is estimated from the remaining static point cloud. A 4D Radar velocity factor is constructed in GTSAM, which takes the velocity estimated from a single scan as its measurement and is integrated directly into the factor graph. The 4D Radar point clouds of consecutive frames are matched to form the odometry factor.
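
As background for the velocity factor above, here is a minimal sketch of single-scan ego-velocity estimation with RANSAC in Python/NumPy. It relies on the standard relation that, for a static point, the measured Doppler velocity equals the negative projection of the ego velocity onto the unit direction of that point, so dynamic points show up as outliers of a linear model; the function name, iteration count, and inlier threshold are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def estimate_ego_velocity(points, dopplers, iters=100, inlier_thresh=0.2, rng=None):
        """RANSAC fit of the ego velocity from one 4D radar scan.

        points  : (N, 3) Cartesian point positions in the radar frame
        dopplers: (N,) measured radial (Doppler) velocities of the same points
        Static-point model: doppler_i = -unit(points_i) . v_ego
        """
        rng = np.random.default_rng() if rng is None else rng
        dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(points), size=3, replace=False)
            try:
                v = np.linalg.solve(-dirs[idx], dopplers[idx])   # candidate velocity
            except np.linalg.LinAlgError:
                continue                                         # degenerate sample
            residuals = np.abs(dopplers + dirs @ v)
            inliers = residuals < inlier_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine on all inliers (static points) with least squares.
        v_ego, *_ = np.linalg.lstsq(-dirs[best_inliers], dopplers[best_inliers], rcond=None)
        return v_ego, best_inliers

According to the abstract, the velocity estimated from the static points is then used as the measurement of the 4D Radar velocity factor in GTSAM; the sketch above is only one standard way such an estimate can be obtained.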
Technical Paper

A Novelty Multitarget-Multisensor Tracking Algorithm with Out of Sequence Measurements for Automated Driving System on Highway Condition

2023-12-20
2023-01-7041
An automated driving system is a multi-source sensor data fusion system. However, different sensor types have different operating frequencies, fields of view, detection capabilities, and data transmission delays. To address these problems, this paper introduces an out-of-sequence measurement processing mechanism into a multi-target detection and tracking system based on millimeter-wave radar and camera. Ablation experiments show that the longitudinal and lateral tracking performance of the fusion system is improved over different distance ranges.
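
The abstract does not detail the out-of-sequence processing mechanism, but a common baseline for delayed measurements is to buffer recent filter states and re-run the filter from just before the late measurement's timestamp. The Python/NumPy sketch below illustrates this buffer-and-reprocess idea on a toy single-target constant-velocity Kalman filter; the class name, state layout (position, velocity), and noise levels are assumptions for illustration, not the paper's algorithm.

    import numpy as np
    from bisect import bisect_right

    class OOSMKalmanTrack:
        """Toy 1D constant-velocity Kalman filter that re-runs from a buffered
        state whenever a delayed (out-of-sequence) measurement arrives."""

        def __init__(self, x0, P0, q=0.5, r=1.0):
            # history entries: (timestamp, state, covariance, measurement)
            self.history = [(0.0, x0.copy(), P0.copy(), None)]
            self.q, self.r = q, r

        def _predict(self, x, P, dt):
            F = np.array([[1.0, dt], [0.0, 1.0]])
            Q = self.q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            return F @ x, F @ P @ F.T + Q

        def _update(self, x, P, z):
            H = np.array([[1.0, 0.0]])
            S = H @ P @ H.T + self.r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            return x, (np.eye(2) - K @ H) @ P

        def insert(self, t, z):
            """Insert a position measurement at time t (t may be older than the
            newest buffered state, but not older than the initial state)."""
            times = [h[0] for h in self.history]
            pos = bisect_right(times, t)
            self.history.insert(pos, (t, None, None, z))
            # Re-run prediction/update from the last state before the insertion.
            t_prev, x, P, _ = self.history[pos - 1]
            for i in range(pos, len(self.history)):
                ti, _, _, zi = self.history[i]
                x, P = self._predict(x, P, ti - t_prev)
                if zi is not None:
                    x, P = self._update(x, P, zi)
                self.history[i] = (ti, x.copy(), P.copy(), zi)
                t_prev = ti
            return x, P

In a real fusion stack, retrodiction-style OOSM updates are usually preferred over full reprocessing for efficiency; the buffered variant shown here is simply the most transparent baseline to compare against.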
Technical Paper

Performance Limitations Analysis of Visual Sensors in Low Light Conditions Based on Field Test

2022-12-22
2022-01-7086
Visual sensors are widely used in autonomous vehicles (AVs) for object detection because they provide abundant information at low cost. However, their performance is strongly affected by low light conditions when AVs drive at night or in tunnels. Low light degrades image quality and object detection performance, and may cause safety of the intended functionality (SOTIF) problems. Therefore, to analyze the performance limitations of visual sensors in low light conditions, a controlled-light experiment on a proving ground is designed. The influence of low light conditions on a two-stage algorithm and a single-stage algorithm is compared and analyzed quantitatively by constructing an evaluation index set covering three aspects: missed detection, classification, and positioning accuracy.
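
The abstract names the three aspects of the evaluation index set but not their exact definitions; the Python/NumPy sketch below shows one plausible way to compute a missed-detection rate, classification accuracy, and mean positioning error per image by matching detections to ground truth with an IoU gate. The threshold, the greedy matching rule, and the center-offset error definition are assumptions for illustration, not the paper's metrics.

    import numpy as np

    def iou(a, b):
        """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def evaluate(gts, dets, iou_thresh=0.5):
        """Missed-detection rate, classification accuracy, and mean positioning
        error (box-center offset) for one image.

        gts : list of (box, label) ground-truth objects
        dets: list of (box, label) detector outputs
        """
        matched, cls_correct, center_err, used = 0, 0, [], set()
        for g_box, g_label in gts:
            candidates = [(iou(g_box, d_box), j) for j, (d_box, _) in enumerate(dets)
                          if j not in used and iou(g_box, d_box) >= iou_thresh]
            if not candidates:
                continue                               # missed detection
            _, best_j = max(candidates)
            used.add(best_j)
            matched += 1
            d_box, d_label = dets[best_j]
            cls_correct += int(d_label == g_label)
            gc = np.array([g_box[0] + g_box[2], g_box[1] + g_box[3]]) / 2
            dc = np.array([d_box[0] + d_box[2], d_box[1] + d_box[3]]) / 2
            center_err.append(np.linalg.norm(gc - dc))
        miss_rate = 1.0 - matched / max(len(gts), 1)
        cls_acc = cls_correct / max(matched, 1)
        mean_pos_err = float(np.mean(center_err)) if center_err else float("nan")
        return miss_rate, cls_acc, mean_pos_err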
Technical Paper

Lane Marking Detection for Highway Scenes based on Solid-state LiDARs

2021-12-15
2021-01-7008
Lane marking detection plays a crucial role in Autonomous Driving Systems and Advanced Driver Assistance Systems. Vision-based lane marking detection has been well studied and put into practical application. LiDAR is more robust than cameras in challenging environments, and with the development of LiDAR technology, price and lifetime are no longer obstacles. We propose a lane marking detection algorithm based on solid-state LiDARs. First, a series of data pre-processing operations is performed for solid-state LiDARs with a small field of view, and the required ground points are extracted by the RANSAC method. Then, based on the OTSU method, we propose an approach for extracting lane marking points using intensity information.
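
A minimal sketch of the two steps named in the abstract, RANSAC ground extraction followed by OTSU thresholding of the return intensity, is given below in Python/NumPy. The iteration count, distance threshold, histogram resolution, and the assumption that points arrive as (x, y, z, intensity) rows are illustrative choices, not the paper's parameters.

    import numpy as np

    def ransac_ground(points, iters=200, dist_thresh=0.1, rng=None):
        """Fit a ground plane n . p + d = 0 with RANSAC and return the inlier
        (ground) points. points is an (N, 4) array of x, y, z, intensity."""
        rng = np.random.default_rng() if rng is None else rng
        xyz = points[:, :3]
        best_mask = np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            p = xyz[rng.choice(len(xyz), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            if np.linalg.norm(n) < 1e-6:
                continue                              # collinear sample, skip
            n = n / np.linalg.norm(n)
            mask = np.abs(xyz @ n - n @ p[0]) < dist_thresh
            if mask.sum() > best_mask.sum():
                best_mask = mask
        return points[best_mask]

    def otsu_threshold(intensity, bins=256):
        """Otsu's method: the threshold that maximizes the between-class
        variance of the intensity histogram."""
        hist, edges = np.histogram(intensity, bins=bins)
        p = hist / hist.sum()
        omega = np.cumsum(p)                          # class-0 probability
        mu = np.cumsum(p * edges[:-1])                # class-0 cumulative mean
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
        return edges[np.argmax(sigma_b)]

    def lane_marking_points(ground_points):
        """Keep ground points whose intensity exceeds the Otsu threshold,
        i.e. candidate lane-marking returns."""
        return ground_points[ground_points[:, 3] > otsu_threshold(ground_points[:, 3])]

Painted lane markings are retroreflective and return noticeably higher intensity than the surrounding asphalt, which is why a histogram-based threshold such as OTSU separates them well on the extracted ground points.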
Technical Paper

LiDAR-Based High-Accuracy Parking Slot Search, Detection, and Tracking

2020-12-29
2020-01-5168
The accuracy of parking slot detection is a challenge for the safety of Automated Valet Parking (AVP), while traditional range-sensor-based parking slot detection methods have mostly focused on the detection rate in scenarios where the ego-vehicle must pass by the slot. This paper uses three-dimensional Light Detection And Ranging (3D LiDAR) to efficiently search for surrounding parking slots without passing by them and emphasizes the accuracy of detection and tracking. For this purpose, a universal process for 3D LiDAR-based high-accuracy slot perception is proposed. First, a Minimum Spanning Tree (MST) is applied to sort obstacles, and the Separating Axis Theorem (SAT) is applied to the bounding boxes of obstacles in the bird's-eye view to find free space between two adjacent obstacles. These bounding boxes are obtained using common point cloud processing methods.
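
As an illustration of the SAT step mentioned above, the following Python/NumPy sketch tests two oriented bounding boxes in the bird's-eye view for overlap and uses that test to decide whether a candidate slot rectangle between two adjacent obstacles is free. The helper names and the (4, 2) corner-array convention are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def _project(corners, axis):
        """Project the four corners of a box onto an axis; return (min, max)."""
        d = corners @ axis
        return d.min(), d.max()

    def sat_overlap(corners_a, corners_b):
        """Separating Axis Theorem for two convex quadrilaterals in 2D.

        corners_a, corners_b: (4, 2) arrays of corners in traversal order.
        Returns True if the boxes overlap, False if a separating axis exists.
        """
        for corners in (corners_a, corners_b):
            for i in range(4):
                edge = corners[(i + 1) % 4] - corners[i]
                axis = np.array([-edge[1], edge[0]])          # edge normal
                axis = axis / (np.linalg.norm(axis) + 1e-12)
                min_a, max_a = _project(corners_a, axis)
                min_b, max_b = _project(corners_b, axis)
                if max_a < min_b or max_b < min_a:
                    return False                              # separating axis found
        return True

    def slot_is_free(slot_corners, obstacle_boxes):
        """A candidate parking-slot rectangle is free if it overlaps none of the
        obstacle bounding boxes (each given as a (4, 2) corner array in the BEV)."""
        return all(not sat_overlap(slot_corners, obb) for obb in obstacle_boxes)

Because the abstract sorts obstacles first with an MST, the free-space test only needs to consider the gap between each pair of adjacent obstacles rather than every pair of bounding boxes.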
Technical Paper

IMM-KF Algorithm for Multitarget Tracking of On-Road Vehicle

2020-04-14
2020-01-0117
Tracking vehicle trajectories is essential for autonomous vehicles and advanced driver-assistance systems to understand the traffic environment and evaluate collision risk. To reduce the position deviation and fluctuation when tracking on-road vehicles with millimeter-wave radar (MMWR), an interacting multiple model Kalman filter (IMM-KF) tracking algorithm including data association and track management is proposed. In general, it is difficult to model the target vehicle accurately because of the lack of vehicle kinematics parameters such as wheelbase, the uncertainty of driving behavior, and the limitations of the sensor's field of view. To handle this uncertainty, an interacting multiple model (IMM) approach using Kalman filters is employed to estimate the states of multiple targets. Radar ego-motion compensation is then performed, since the original measurements are in the radar's polar coordinate system.
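
The abstract does not state which motion models the IMM uses, so the Python/NumPy sketch below uses a common stand-in: two constant-velocity Kalman filters that differ only in process noise, representing non-maneuvering and maneuvering behavior. The state layout (x, y, vx, vy), the noise levels, and the model-transition matrix are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def kf_step(x, P, z, F, Q, H, R):
        """One Kalman predict + update; returns (x, P, likelihood of z)."""
        x, P = F @ x, F @ P @ F.T + Q
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x, P = x + K @ y, (np.eye(len(x)) - K @ H) @ P
        lik = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        return x, P, lik

    def imm_step(xs, Ps, mu, z, dt, trans, H, R, q_levels=(0.1, 5.0)):
        """One IMM cycle: mixing, mode-matched filtering, model-probability
        update, and combination, for two constant-velocity models."""
        n = len(mu)
        # 1) Mixing: blend the model-conditioned estimates.
        c = trans.T @ mu                                   # predicted model probabilities
        mix = trans * mu[:, None] / c[None, :]             # mix[i, j] = P(model i | model j)
        xs0 = [sum(mix[i, j] * xs[i] for i in range(n)) for j in range(n)]
        Ps0 = [sum(mix[i, j] * (Ps[i] + np.outer(xs[i] - xs0[j], xs[i] - xs0[j]))
                   for i in range(n)) for j in range(n)]
        # 2) Mode-matched Kalman filtering with a shared CV transition matrix.
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        new_xs, new_Ps, liks = [], [], np.zeros(n)
        for j in range(n):
            Q = q_levels[j] * dt * np.eye(4)
            xj, Pj, liks[j] = kf_step(xs0[j], Ps0[j], z, F, Q, H, R)
            new_xs.append(xj)
            new_Ps.append(Pj)
        # 3) Model-probability update and 4) combined output estimate.
        mu = liks * c
        mu = mu / mu.sum()
        x_out = sum(mu[j] * new_xs[j] for j in range(n))
        P_out = sum(mu[j] * (new_Ps[j] + np.outer(new_xs[j] - x_out, new_xs[j] - x_out))
                    for j in range(n))
        return new_xs, new_Ps, mu, x_out, P_out

    # A typical model-transition matrix keeps each model with high probability:
    # trans = np.array([[0.95, 0.05], [0.05, 0.95]])

Each tracked vehicle would run one such IMM instance; the data association and track management mentioned in the abstract decide which measurement feeds which instance in every radar scan.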
Technical Paper

Study on Target Tracking Based on Vision and Radar Sensor Fusion

2018-04-03
2018-01-0613
Faced with intricate traffic conditions, a single sensor cannot meet the safety requirements of Advanced Driver Assistance Systems (ADAS) and autonomous driving. In multi-target tracking, the number of targets detected by the vision sensor is sometimes smaller than the number of current tracks, while the number of targets detected by the millimeter-wave radar is larger than the number of current tracks. Hence, a multi-sensor information fusion algorithm is presented that exploits the advantages of both the vision sensor and the millimeter-wave radar. The algorithm is based on a centralized fusion strategy in which the fusion center performs unified track management. First, the vision sensor and radar detect targets and measure their range and azimuth angle. Then, the detection data from the vision sensor and radar are transferred to the fusion center and associated with the predictions of the current tracks for multi-target tracking.
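
A minimal sketch of the association step at such a fusion center is given below (Python, using NumPy and SciPy's linear_sum_assignment). Converting every range/azimuth detection to Cartesian coordinates and gating the assignment with a plain Euclidean distance are simplifying assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def polar_to_cartesian(r, azimuth):
        """Convert a (range, azimuth) detection into x/y in the vehicle frame."""
        return np.array([r * np.cos(azimuth), r * np.sin(azimuth)])

    def associate(track_predictions, detections, gate=4.0):
        """Assign fused sensor detections to predicted track positions.

        track_predictions: (M, 2) predicted x/y positions of the current tracks
        detections       : (N, 2) x/y positions of vision and radar detections
        Returns (track_index, detection_index) pairs plus the indices of
        unassigned detections.
        """
        if len(track_predictions) == 0 or len(detections) == 0:
            return [], list(range(len(detections)))
        cost = np.linalg.norm(track_predictions[:, None, :] - detections[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        pairs = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
        assigned = {j for _, j in pairs}
        unassigned = [j for j in range(len(detections)) if j not in assigned]
        return pairs, unassigned

Detections left unassigned after gating would typically initialize tentative tracks, which falls under the unified track management the abstract assigns to the fusion center.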