Semantic Segmentation with High Inference Speed in Off-Road Environments (SAE Technical Paper 2023-01-0868)
Semantic segmentation is an integral component of many autonomous vehicle systems, used for tasks such as path identification and scene understanding. Autonomous vehicles must make decisions quickly enough to react to their surroundings; therefore, they must be able to segment the environment at high speed. There has been a fair amount of research on semantic segmentation, but most of it focuses on achieving higher accuracy, measured by mean intersection over union (mIoU), rather than higher inference speed. Moreover, most semantic segmentation models are trained and evaluated on urban scenes rather than off-road environments. As a result, there is a lack of knowledge about semantic segmentation models suitable for off-road unmanned ground vehicles. In this research, SwiftNet, a semantic segmentation deep learning model designed for high inference speed and accuracy on images with large dimensions, was implemented and evaluated for inference speed of semantic segmentation in off-road environments. SwiftNet was pre-trained on the ImageNet dataset and then trained on 70% of the labeled images from the Rellis-3D dataset. Rellis-3D is an extensive off-road dataset designed for semantic segmentation, containing 6234 labeled 1920x1200 images. SwiftNet was evaluated on the remaining 30% of the Rellis-3D images and achieved an average inference speed of 24 frames per second (FPS) and an mIoU score of 73.8% on a Titan RTX GPU.
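As a rough illustration of the evaluation described in the abstract, the sketch below shows one way per-class IoU, mIoU, and average inference speed (FPS) could be measured for a segmentation model in PyTorch. The placeholder network, class count, and timing loop are assumptions for illustration only; SwiftNet itself and the Rellis-3D data pipeline are not reproduced here.

```python
# Minimal sketch (not code from the paper): measuring mIoU and average FPS
# for a semantic segmentation model in PyTorch. The stand-in model, class
# count, and random input are placeholders, not the paper's actual setup.
import time
import torch
import torch.nn as nn

NUM_CLASSES = 20          # placeholder; Rellis-3D defines its own label set
IMAGE_SIZE = (1200, 1920) # Rellis-3D images are 1920x1200 (width x height)

def miou(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """Mean intersection-over-union over classes present in prediction or label."""
    ious = []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        union = (pred_c | target_c).sum().item()
        if union == 0:
            continue  # class absent from both prediction and label; skip it
        inter = (pred_c & target_c).sum().item()
        ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

def average_fps(model: nn.Module, n_frames: int = 100) -> float:
    """Average frames per second over repeated forward passes on a random image."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(1, 3, *IMAGE_SIZE, device=device)
    with torch.no_grad():
        model(x)  # warm-up pass before timing
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_frames):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_frames / (time.perf_counter() - start)

if __name__ == "__main__":
    # Stand-in network; in the paper this would be SwiftNet pre-trained on ImageNet.
    dummy = nn.Conv2d(3, NUM_CLASSES, kernel_size=3, padding=1)
    print(f"~{average_fps(dummy):.1f} FPS on a random {IMAGE_SIZE} input")
```

In practice, the timed forward pass would run over the held-out 30% of Rellis-3D images rather than random tensors, with mIoU accumulated over the same split.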
Citation: Selee, B., Faykus, M., and Smith, M., "Semantic Segmentation with High Inference Speed in Off-Road Environments," SAE Technical Paper 2023-01-0868, 2023, https://doi.org/10.4271/2023-01-0868.
Author(s):
Bradley Selee, Max Faykus, Melissa Smith
Affiliation:
Clemson University
Pages: 7
Event:
WCX SAE World Congress Experience
ISSN:
0148-7191
e-ISSN:
2688-3627
Related Topics:
Unmanned ground vehicles
Autonomous vehicles
Machine learning