Rain-Adaptive Intensity-Driven Object Detection for Autonomous Vehicles
2020-01-0091
Deep-learning-based approaches to object detection depend heavily on the nature of the data used for training, especially for vehicles driving in cluttered urban environments. Consequently, the performance of Convolutional Neural Network (CNN) architectures designed and trained on data captured under clear weather and favorable conditions can degrade significantly when tested under cloudy and rainy conditions. This becomes a major safety issue for emerging autonomous vehicle platforms that rely on CNN-based object detection. Furthermore, despite noticeable progress in the development of advanced visual deraining algorithms, such algorithms still have inherent limitations in improving the performance of state-of-the-art object detectors. In this paper, we address this problem area by making the following contributions. We systematically study and quantify the influence of a wide range of rain intensities on the performance of popular deep-learning-based object detectors trained on clear visual data. We show that even low rain intensities can significantly degrade the performance of detectors trained on clear visuals. Subsequently, we propose a Rain-Adaptive Intensity-Driven (RAID) deep learning framework for object detection under a variety of rain intensities. Controlled experiments based on rain simulations, seamlessly integrated with real visual data captured by moving vehicles in truly cluttered urban environments, show the superiority of the proposed RAID framework over state-of-the-art deraining methods used in conjunction with popular deep-learning-based object detectors.
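To make the controlled-degradation idea concrete, the sketch below shows one simple way such a study could be set up: overlay synthetic rain streaks on clear images at a chosen intensity, then measure how image fidelity (here PSNR, as a stand-in for detector mAP) falls as intensity rises. This is an illustrative NumPy toy, not the paper's rain simulator; the streak model, the `intensity` parameterization, and the brightness constant are all assumptions.

```python
import numpy as np

def add_rain(image, intensity, streak_len=15, seed=0):
    """Overlay bright vertical rain streaks on an HxWx3 uint8 image.

    intensity in [0, 1] controls streak density (illustrative model only).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    n = int(intensity * h * w * 0.01)          # number of streaks (assumed density)
    layer = np.zeros((h, w), dtype=np.float32)
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    for y, x in zip(ys, xs):
        layer[y:y + streak_len, x] = 1.0       # one vertical streak (clipped at border)
    # Brighten pixels along streaks; 180 is an arbitrary streak brightness.
    out = image.astype(np.float32) + layer[..., None] * 180.0
    return np.clip(out, 0, 255).astype(np.uint8)

def psnr(a, b):
    """Peak signal-to-noise ratio between two uint8 images."""
    mse = np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    clear = np.full((128, 128, 3), 100, dtype=np.uint8)  # dummy clear image
    for level in (0.1, 0.4, 0.8):
        rained = add_rain(clear, level)
        print(f"intensity={level:.1f}  PSNR={psnr(clear, rained):.1f} dB")
```

In an actual study like the one described in the abstract, each `rained` image would be fed to a pretrained detector and detection accuracy (e.g. mAP) would be recorded per rain intensity instead of PSNR.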