Panoptic Based Camera and Lidar Fusion for Distance Estimation in
Autonomous Driving Vehicles
SAE Technical Paper 2022-28-0307
Position estimation of the surrounding objects seen by the sensors mounted on an
autonomous vehicle is a key module, and it is typically carried out with
camera-lidar fusion owing to the high accuracy of depth estimation from the
lidar point cloud. For a typical automotive lidar with 64 scan channels or
fewer, object detection with lidar alone is not dependable at distances above
100 m, as the lidar clusters become sparse, while a high-resolution camera can
offer reliable detection even at such distances. Position calculation is best
achieved when there is a reliable means of finding the lidar points that
correspond to each camera detection. To address this, we propose a novel
grid-based approach in which a grid is created in the point cloud around the
object position derived from the camera detections.
The correspondence between camera pixels and the lidar point cloud tends to
suffer when the object of interest is occluded (e.g., by other vehicles, guard
rails, or poles) or when there are false detections from the camera object
detection module (e.g., due to mirror reflections). Our proposed grid-based
approach fuses camera object detection with panoptic segmentation, and the
result is then associated with the lidar point cloud and lidar object
detections for accurate distance estimation. We account for the occlusion
level of each camera-detected object with the help of panoptic segmentation of
the image frames, so that only the lidar points corresponding to the actually
visible parts of the object are used for fusion and distance estimation.
Panoptic segmentation provides both instance and semantic segmentation, which
helps identify the visible points even when the occluding object belongs to
the same class. This removes the lidar points of static and background objects
projected onto the camera detection bounding boxes, which in turn helps
identify valid clusters for distance estimation in the fusion algorithm.
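The occlusion-aware filtering described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, argument layout, and the assumption that the panoptic output is a per-pixel instance-id map are ours.

```python
import numpy as np

def filter_lidar_by_panoptic(uv_points, depths, bbox, panoptic_ids, target_id):
    """Keep only the lidar points that project onto the detected object's own
    panoptic instance mask inside its camera bounding box.

    uv_points    : (N, 2) pixel coordinates of lidar points projected into the image
    depths       : (N,) range of each lidar point
    bbox         : (x1, y1, x2, y2) camera detection bounding box
    panoptic_ids : (H, W) per-pixel instance id from panoptic segmentation (assumed format)
    target_id    : instance id assigned to the detected object
    """
    x1, y1, x2, y2 = bbox
    u = uv_points[:, 0].astype(int)
    v = uv_points[:, 1].astype(int)
    h, w = panoptic_ids.shape
    # keep points that fall inside both the image and the detection box
    in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    in_box = (u >= x1) & (u < x2) & (v >= y1) & (v < y2)
    keep = in_img & in_box
    # reject points landing on occluders or background instances,
    # even when the occluder is another object of the same class
    keep[keep] &= panoptic_ids[v[keep], u[keep]] == target_id
    return uv_points[keep], depths[keep]
```

Because the mask is per instance rather than per class, lidar returns from an occluding vehicle in front of the detected vehicle are discarded rather than merged into its cluster.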
Positions estimated from the camera 2D detections are then associated with the
lidar detections using the closest Euclidean distance. We evaluated the
algorithm on a custom dataset and observed a 28% increase in recall compared
to lidar fusion using camera object detection alone.
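The closest-Euclidean-distance association step could look like the sketch below. The abstract only states that the closest Euclidean distance is used; the greedy one-to-one matching strategy and the 3 m gating threshold are our assumptions for illustration.

```python
import numpy as np

def associate_closest(cam_positions, lidar_positions, max_dist=3.0):
    """Greedily match each camera-estimated position to its nearest
    unmatched lidar detection within a gating distance (assumed 3 m)."""
    pairs = []
    used = set()  # lidar detections already claimed by a camera object
    for i, p in enumerate(cam_positions):
        d = np.linalg.norm(lidar_positions - p, axis=1)
        d[list(used)] = np.inf  # forbid double-matching a lidar detection
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

Each returned pair (i, j) links camera object i to lidar detection j, after which the lidar range of detection j provides the final distance estimate.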
Citation: Jose, E., P, A., Patil, M., Thayyil Ravi, A. et al., "Panoptic Based Camera and Lidar Fusion for Distance Estimation in Autonomous Driving Vehicles," SAE Technical Paper 2022-28-0307, 2022, https://doi.org/10.4271/2022-28-0307.
Author(s):
Edwin Jose, Aparna M P, Mrinalini Patil, Arunkrishna Thayyil Ravi, Manoj Rajan
Affiliation:
Tata Consultancy Services
Event:
10TH SAE India International Mobility Conference
ISSN:
0148-7191
e-ISSN:
2688-3627
Related Topics:
Autonomous vehicles
Imaging and visualization
Lidar
Sensors and actuators
Mathematical models
Mirrors