A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous
Driving Based on Camera and Radar
Also appears in: SAE International Journal of Connected and Automated Vehicles-V132-12EJ
Effective environment perception is a prerequisite for the successful
application of autonomous driving, especially the detection of traffic
objects, which affects downstream tasks such as driving decisions and motion
execution. However, recent studies show that a single sensor cannot perceive
the surrounding environment stably and effectively in complex conditions. In
this article, we propose a multi-scale feature fusion framework that uses a
dual backbone network to extract camera and radar feature maps and performs
feature fusion at three different feature scales with a new fusion module. In
addition, we introduce a new generation mechanism for radar projection images
and relabel the nuScenes dataset, since no other autonomous driving dataset is
suitable for training and testing the model. Experimental results show that
the fusion models achieve higher accuracy than visual image-based models under
the PASCAL Visual Object Classes (VOC) and Common Objects in Context (COCO)
evaluation criteria, about 2% above the baseline model (YOLOX).
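The fusion described above (camera and radar feature maps combined at three feature scales) can be sketched roughly as follows. The tensor shapes, the strides, the radar channel count, and the simple concatenate-then-mix fusion are all illustrative assumptions for this sketch, not the paper's actual fusion module.

```python
import numpy as np

def fuse_features(cam_feat, radar_feat):
    """Toy fusion: concatenate camera and radar feature maps along the
    channel axis, then mix channels with a per-pixel linear map (the
    NumPy equivalent of a 1x1 convolution). Illustrative only."""
    fused = np.concatenate([cam_feat, radar_feat], axis=0)  # (C_cam+C_rad, H, W)
    c_in = fused.shape[0]
    c_out = cam_feat.shape[0]  # keep the camera channel count after fusion
    rng = np.random.default_rng(0)
    w = rng.standard_normal((c_out, c_in)) / np.sqrt(c_in)  # 1x1 conv weights
    return np.einsum('oc,chw->ohw', w, fused)

# Three feature scales, as in typical detection backbones; the specific
# shapes below are assumptions (the paper only says "three different
# feature scales").
scales = [(256, 80, 80), (512, 40, 40), (1024, 20, 20)]
fused_maps = []
for c, h, w in scales:
    cam = np.zeros((c, h, w))          # camera backbone feature map
    radar = np.zeros((c // 4, h, w))   # radar branch: arbitrary channel count
    fused_maps.append(fuse_features(cam, radar))

for f in fused_maps:
    print(f.shape)
```

Each fused map retains the camera branch's channel count and spatial size, so it can feed a standard detection head unchanged.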
Citation: He, S., Lin, C., and Hu, Z., "A Multi-scale Fusion Obstacle Detection Algorithm for Autonomous Driving Based on Camera and Radar," SAE Intl. J CAV 6(3):333-343, 2023, https://doi.org/10.4271/12-06-03-0022.
Author(s):
Sihuang He, Chen Lin, Zhaohui Hu
Affiliation:
Hunan University, State Key Laboratory of Advanced Design and
Manufacturing for Vehicle Body, China
Pages: 12
ISSN:
2574-0741
e-ISSN:
2574-075X
Related Topics:
Autonomous vehicles
Mathematical models
Radar
Cameras
Sensors and actuators
Education and training