Search Results

Viewing 1 to 3 of 3
Technical Paper

Improved Joint Probabilistic Data Association Multi-target Tracking Algorithm Based on Camera-Radar Fusion

2021-04-15
2021-01-5002
An improved Joint Probabilistic Data Association (JPDA) multi-target tracking algorithm based on camera-radar fusion is proposed to address the poor tracking performance of single sensors, the unknown target detection probability, and the loss of valid targets in complex traffic scenarios. First, the association probability between each target and measurement is obtained according to the correlation rule between the target track and the measurement. The measurement set is then partitioned into four classes: measurements matched by both camera and radar, measurements matched by the camera only, measurements matched by the radar only, and unmatched measurements. The association probability is corrected with a different confidence level for each class, which avoids relying on an unknown detection probability.
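To illustrate the measurement partitioning and confidence correction described in this abstract, here is a minimal Python sketch (not the authors' code); the class weights, function names, and data layout are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the authors' code): confidence-weighted JPDA
# association probabilities for camera-radar fusion. The class weights
# and data structures below are assumptions made for demonstration.
import numpy as np

# Hypothetical confidence weights for the four measurement classes named
# in the abstract: matched by both sensors, camera-only, radar-only, none.
CLASS_WEIGHTS = {"both": 1.0, "camera_only": 0.7, "radar_only": 0.7, "none": 0.3}

def classify_measurement(has_camera_match, has_radar_match):
    """Assign a gated measurement to one of the four fusion classes."""
    if has_camera_match and has_radar_match:
        return "both"
    if has_camera_match:
        return "camera_only"
    if has_radar_match:
        return "radar_only"
    return "none"

def corrected_association_probs(raw_probs, camera_matches, radar_matches):
    """Scale raw JPDA association probabilities by class confidence and
    renormalize, so no explicit detection probability is required."""
    raw_probs = np.asarray(raw_probs, dtype=float)
    weights = np.array([
        CLASS_WEIGHTS[classify_measurement(c, r)]
        for c, r in zip(camera_matches, radar_matches)
    ])
    corrected = raw_probs * weights
    total = corrected.sum()
    return corrected / total if total > 0 else corrected

# Example: three gated measurements for one track
probs = corrected_association_probs(
    raw_probs=[0.5, 0.3, 0.2],
    camera_matches=[True, True, False],
    radar_matches=[True, False, False],
)
print(probs)  # the camera+radar matched measurement keeps the highest weight
```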
Technical Paper

Vehicle Detection Based on Deep Neural Network Combined with Radar Attention Mechanism

2020-12-29
2020-01-5171
In autonomous driving perception tasks, detection accuracy is an essential evaluation metric, especially for small targets. In this work, we propose a multi-sensor fusion neural network that combines radar and image data to improve the camera's detection confidence and the accuracy of bounding-box regression. The fusion network is built on the basic structure of the single-shot multi-box detector (SSD). Inspired by attention mechanisms in image processing, our work incorporates the prior knowledge from radar detections into the convolutional block attention module (CBAM), forming a new attention module called the radar convolutional block attention module (RCBAM). We add the RCBAM to the SSD detection network to build a deep neural network that fuses millimeter-wave radar and camera data.
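As a rough illustration of the RCBAM idea, the following PyTorch sketch shows a CBAM-style block whose spatial attention also ingests a radar prior map; the layer sizes, the way the radar map is fused, and all names are assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): a CBAM-style block
# extended with a radar prior channel in its spatial attention, loosely
# following the RCBAM idea described in the abstract.
import torch
import torch.nn as nn

class RadarCBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention over [avg map, max map, radar prior]
        self.spatial = nn.Conv2d(3, 1, kernel_size=7, padding=3)

    def forward(self, x, radar_prior):
        b, c, h, w = x.shape
        # Channel attention
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        ca = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention with the radar prior resized to the feature map
        prior = nn.functional.interpolate(
            radar_prior, size=(h, w), mode="bilinear", align_corners=False
        )
        sa_in = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True), prior], dim=1
        )
        sa = torch.sigmoid(self.spatial(sa_in))
        return x * sa

# Example: attend a hypothetical SSD feature map with a radar occupancy prior
feat = torch.randn(1, 256, 38, 38)        # hypothetical SSD feature map
radar_map = torch.rand(1, 1, 300, 300)    # hypothetical radar occupancy image
out = RadarCBAM(256)(feat, radar_map)
print(out.shape)  # torch.Size([1, 256, 38, 38])
```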
Technical Paper

Drivable Area Detection and Vehicle Localization Based on Multi-Sensor Information

2020-04-14
2020-01-1027
A multi-sensor information fusion framework serves as the eyes of unmanned driving and Advanced Driver Assistance Systems (ADAS), allowing them to perceive the surrounding environment. Beyond environment perception, real-time vehicle localization is also a key difficulty of unmanned driving technology. A sudden loss of the high-precision GPS signal or missing lane lines makes localization considerably more difficult and dangerous during unmanned driving. In this paper, a road boundary feature extraction algorithm based on multi-sensor fusion of automotive radar and vision is proposed to provide auxiliary localization for the vehicle. First, we designed a 79 GHz (78-81 GHz) Ultra-Wide Band (UWB) millimeter-wave radar, which can obtain point cloud information on road boundary features such as guardrails and green belts.
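The following Python sketch illustrates one way such radar returns could be turned into a boundary estimate for auxiliary localization; the thresholds, field names, and polynomial fit are assumptions made for demonstration, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): extract a road-boundary
# curve from radar point-cloud returns by keeping strong, laterally offset
# detections and fitting a polynomial to them.
import numpy as np

def extract_boundary(points_xy, rcs, min_rcs=0.0, lateral_range=(1.5, 6.0)):
    """Keep radar returns that resemble a continuous boundary (guardrail,
    green belt) on one side of the vehicle and fit y = f(x) to them."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    mask = (rcs > min_rcs) & (y > lateral_range[0]) & (y < lateral_range[1])
    if mask.sum() < 3:
        return None  # not enough boundary evidence
    # A second-order fit captures gentle road curvature
    coeffs = np.polyfit(x[mask], y[mask], deg=2)
    return np.poly1d(coeffs)

def lateral_offset(boundary_poly, ego_x=0.0):
    """Distance from the ego vehicle to the fitted boundary, usable as an
    auxiliary localization measurement when GPS or lane lines drop out."""
    return float(boundary_poly(ego_x))

# Example with synthetic radar returns along a slightly curved guardrail
xs = np.linspace(0, 40, 60)
pts = np.column_stack([xs, 3.0 + 0.002 * xs**2 + np.random.normal(0, 0.05, xs.size)])
poly = extract_boundary(pts, rcs=np.full(xs.size, 5.0))
print(round(lateral_offset(poly), 2))  # roughly 3.0 m to the boundary at the ego position
```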