Training of Neural Networks with Automated Labeling of Simulated Sensor Data
2019-01-0120
While convolutional neural networks (CNNs) have revolutionized ground-vehicle autonomy in the last decade, this class of algorithms requires large, truth-labeled data sets for training. Initially developed for image processing, CNNs are now applied in nearly every part of robotic software development. However, many of the freely available software libraries for training CNNs require image inputs. In this work, we present a novel method for rapidly training CNNs for autonomous driving that utilizes physics-based simulation of sensors, along with automated truth labeling, to improve the speed and accuracy of training data acquisition for both camera and LIDAR sensors. This framework is enabled by the MSU Autonomous Vehicle Simulator (MAVS), a physics-based sensor simulator for ground-vehicle robotics that includes high-fidelity simulations of LIDAR, camera, and other sensors.
Chris Goodin, Daniel Carruth, Matthew Doude, Christopher Hudson, Lalitha Dabbiru, Suvash Sharma
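The central idea of automated truth labeling is that a simulator knows the exact pose and class of every object it renders, so per-pixel ground-truth labels can be emitted alongside each synthetic frame at no annotation cost. The following is a minimal sketch of that idea, not the MAVS API: all names (`CLASS_IDS`, `render_with_labels`, the flat-filled rectangles standing in for rendered geometry) are illustrative assumptions.

```python
import numpy as np

# Hypothetical class map; a real simulator would define its own taxonomy.
CLASS_IDS = {"background": 0, "road": 1, "vehicle": 2}

def render_with_labels(width, height, boxes):
    """Return an (image, label) pair for one simulated frame.

    boxes: list of (class_name, x0, y0, x1, y1) rectangles standing in
    for objects whose geometry and pose the simulator knows exactly.
    """
    image = np.zeros((height, width, 3), dtype=np.uint8)
    labels = np.zeros((height, width), dtype=np.uint8)
    for name, x0, y0, x1, y1 in boxes:
        cid = CLASS_IDS[name]
        # A real renderer would shade geometry; here we flat-fill a color.
        image[y0:y1, x0:x1] = (50 * cid, 100, 150)
        # The label image is exact by construction: no human annotation.
        labels[y0:y1, x0:x1] = cid
    return image, labels

# One synthetic (image, truth-label) training pair.
image, labels = render_with_labels(
    64, 48, [("road", 0, 24, 64, 48), ("vehicle", 20, 16, 40, 32)])
```

The same principle extends to LIDAR: since each simulated ray is traced against known geometry, every return can carry the class ID of the surface it hit.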