Published 2024-07-02

Automated AI-based Annotation Framework for 3D Object Detection from LIDAR Data in Industrial Areas (SAE Technical Paper 2024-01-2999)

Autonomous driving is being deployed in various settings, including indoor areas such as industrial halls. LIDAR sensors are currently popular because of their superior spatial resolution and accuracy compared to RADAR, as well as their robustness to varying lighting conditions compared to cameras. They enable precise, real-time perception of the surrounding environment. Several datasets for on-road scenarios, such as KITTI or Waymo, are publicly available. However, there is a notable lack of open-source datasets specifically designed for industrial hall scenarios, particularly for 3D LIDAR data. Furthermore, in industrial areas where vehicle platforms with omnidirectional drive are often used, LIDAR sensors with a 360° FOV are necessary to monitor all critical objects. Although high-resolution sensors would be optimal, the price of mechanical 360° FOV LIDAR sensors rises sharply with increasing resolution, and most existing AI models for 3D object detection in point clouds are based on high-resolution LIDAR with many channels. This work aims to address these gaps by developing an automated AI-based labeling tool that generates 3D ground truth annotations for object detection from low-resolution LIDAR datasets captured in industrial hall scenarios. The point cloud data is recorded inside an industrial area at the KIT Campus Ost using a 16-channel LIDAR; the recorded objects include, for example, a forklift and box pallets. A LIDAR super-resolution approach takes the recorded data as input and upsamples it to 64-channel point cloud data. The upsampled data is then used to fine-tune a 3D object detection model (Part-A2 net). Our testing results on a restricted dataset are highly promising, achieving a mean Average Precision of 95% at an IoU threshold of 0.75. The labeling tool is fully automated and uses the trained model for object detection; manual corrections are also possible. This research is part of the project FLOOW.
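
The abstract does not spell out the super-resolution architecture, but the general idea of channel upsampling can be illustrated with a range-image interpolation baseline. The sketch below is only illustrative: it assumes a 16-channel spinning LIDAR with roughly ±15° vertical FOV and a nearest-return projection, and the function names and parameters are not taken from the paper. It projects the point cloud to a 16-row range image and interpolates it to 64 rows.

```python
import numpy as np

def pointcloud_to_range_image(points, n_rows=16, n_cols=1024,
                              v_fov_deg=(-15.0, 15.0)):
    """Project an (N, 3) point cloud onto a spherical range image.

    Assumes a 16-channel spinning LIDAR with a +/-15 degree vertical FOV
    (a VLP-16-class sensor); adjust the parameters for the actual device.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))

    col = ((yaw + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    v_lo, v_hi = np.deg2rad(v_fov_deg[0]), np.deg2rad(v_fov_deg[1])
    row = ((pitch - v_lo) / (v_hi - v_lo) * (n_rows - 1)).round().astype(int)
    row = np.clip(row, 0, n_rows - 1)

    image = np.zeros((n_rows, n_cols), dtype=np.float32)
    image[row, col] = r                                      # last return per cell wins
    return image


def upsample_range_image(image, factor=4):
    """Upsample 16 rows to 64 rows by linear interpolation along the
    vertical (channel) axis -- a naive stand-in for the learned
    super-resolution model used in the paper."""
    n_rows, n_cols = image.shape
    src = np.arange(n_rows)
    dst = np.linspace(0, n_rows - 1, n_rows * factor)
    return np.stack(
        [np.interp(dst, src, image[:, c]) for c in range(n_cols)], axis=1)


# Example: points is an (N, 3) array from one 16-channel scan
# dense = upsample_range_image(pointcloud_to_range_image(points), factor=4)
```

The upsampled range image would then be converted back to a point cloud before being fed to the detector, a step omitted here for brevity.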
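Similarly, the automated labeling loop can be pictured as running the fine-tuned detector on each frame and keeping confident boxes as annotation proposals that a human can still correct. The detector interface below (a `predict` method returning boxes with class, pose, size, and score) is a hypothetical stand-in; the paper's actual tool and the Part-A2 implementation it wraps are not described in the abstract.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box3D:
    label: str                             # e.g. "forklift" or "box_pallet"
    center: Tuple[float, float, float]     # (x, y, z) in the LIDAR frame
    size: Tuple[float, float, float]       # (length, width, height)
    yaw: float                             # heading around the z axis, radians
    score: float                           # detector confidence


def auto_label_frame(detector, points, score_threshold=0.5) -> List[Box3D]:
    """Generate annotation proposals for one LIDAR frame.

    `detector` is any object exposing predict(points) -> List[Box3D],
    e.g. a fine-tuned Part-A2 net served through a detection framework
    (an assumption, not the paper's stated setup).
    """
    proposals = [box for box in detector.predict(points)
                 if box.score >= score_threshold]
    # Boxes below the threshold are dropped; the remaining proposals are
    # shown in the labeling UI, where an annotator can adjust or delete them.
    return proposals
```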
