Search Results

Technical Paper

Generation and Usage of Virtual Data for the Development of Perception Algorithms Using Vision

2016-04-05
2016-01-0170
Camera data generated in a 3D virtual environment has been used to train object detection and identification algorithms. Forty common US road traffic signs served as the objects of interest during the investigation of these methods. The virtual camera was randomly placed along the road at a height appropriate for a camera mounted on a vehicle's rear-view mirror, and traffic signs were then placed randomly alongside the road in front of it. To better represent the real world, effects such as shadows, occlusions, washout/fade, skew, rotation, reflections, fog, rain, snow, and varied illumination were randomly included in the generated data. Images were generated at a rate of approximately one thousand per minute, and each image was automatically annotated with the true location of every sign it contained, facilitating both supervised learning and testing of the trained algorithms.
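The key property of this virtual pipeline is that ground-truth annotations come for free: because the generator places each sign, it already knows the true bounding box. The minimal sketch below illustrates that idea only; the scene dimensions, sign classes, and augmentation list are assumptions for illustration, not the paper's actual pipeline.

```python
import random

# Illustrative sketch (not the paper's generator): randomly place sign
# "objects" in a virtual frame and emit the ground-truth bounding box
# for each, mimicking automatic annotation for supervised learning.
SIGN_CLASSES = ["stop", "yield", "speed_limit_55"]   # hypothetical subset of the 40 signs
AUGMENTATIONS = ["shadow", "occlusion", "fade", "skew",
                 "rotation", "fog", "rain", "snow"]  # assumed effect names

def generate_annotated_frame(rng, width=1280, height=720):
    """Return one synthetic frame description with its auto-annotations."""
    annotations = []
    for _ in range(rng.randint(1, 3)):           # 1-3 signs per frame (assumed)
        w = rng.randint(32, 128)                 # sign size in pixels
        h = w                                    # square signs for simplicity
        x = rng.randint(0, width - w)
        y = rng.randint(0, height - h)
        annotations.append({
            "cls": rng.choice(SIGN_CLASSES),
            "bbox": (x, y, x + w, y + h),        # true location, known by construction
            "effects": rng.sample(AUGMENTATIONS, k=rng.randint(0, 2)),
        })
    return {"size": (width, height), "signs": annotations}

rng = random.Random(0)                           # seeded for reproducibility
frame = generate_annotated_frame(rng)
```

Because the annotation is emitted at generation time rather than labeled by hand, throughput is limited only by rendering speed, which is what makes rates on the order of a thousand images per minute plausible.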
Technical Paper

Region Proposal Technique for Traffic Light Detection Supplemented by Deep Learning and Virtual Data

2017-03-28
2017-01-0104
In this work, we outline a process for traffic light detection in the context of autonomous vehicles and driver-assistance features. Our approach leverages the automatic annotations of virtually generated road scenes. Using the automatically generated bounding boxes around the illuminated traffic lights themselves, we trained an 8-layer deep neural network, without pre-training, to classify traffic light signals (green, amber, red). After training on virtual data, we tested the network on real-world data collected from a forward-facing camera on a vehicle. Our new region proposal technique uses color-space conversion and contour extraction to identify candidate regions to feed to the deep neural network classifier. Depending on the time of day, we convert the RGB images to a different color space in order to more accurately extract the appropriate regions of interest, and filter candidates based on color, shape, and size before passing them to the network.
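The region proposal stage described above can be sketched as three steps: convert pixels to a hue-based color space, extract connected regions from the resulting mask, and filter them by size and shape. The thresholds, the "red" hue band, and the pure-Python connected-component pass below are illustrative assumptions, not the authors' implementation.

```python
import colorsys

def candidate_regions(image, min_area=4, max_aspect=2.0):
    """Propose regions whose pixels resemble an illuminated red light.

    `image` is a list of rows of (r, g, b) tuples in 0-255. The hue,
    saturation, area, and aspect thresholds are illustrative guesses.
    """
    h_px, w_px = len(image), len(image[0])
    # 1. Color-space conversion: RGB -> HSV, threshold on hue/sat/value.
    mask = [[False] * w_px for _ in range(h_px)]
    for y in range(h_px):
        for x in range(w_px):
            r, g, b = image[y][x]
            hue, sat, val = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # "red" sits near hue 0 or 1; require bright, saturated pixels
            if (hue < 0.05 or hue > 0.95) and sat > 0.5 and val > 0.5:
                mask[y][x] = True
    # 2. Region extraction via connected components (iterative flood fill),
    #    standing in for contour extraction.
    seen = [[False] * w_px for _ in range(h_px)]
    boxes = []
    for y in range(h_px):
        for x in range(w_px):
            if mask[y][x] and not seen[y][x]:
                stack, xs, ys = [(x, y)], [], []
                seen[y][x] = True
                while stack:
                    cx, cy = stack.pop()
                    xs.append(cx)
                    ys.append(cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w_px and 0 <= ny < h_px
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                # 3. Filter candidates by size and shape before they are
                #    handed to the classifier.
                w = max(xs) - min(xs) + 1
                h = max(ys) - min(ys) + 1
                aspect = max(w, h) / min(w, h)
                if len(xs) >= min_area and aspect <= max_aspect:
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

In practice a production pipeline would use an optimized library for the conversion and contour steps; the point of the sketch is that only small, roughly compact, correctly colored regions survive to be classified by the network.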