SAE Technical Paper 2021-01-0088
2021-04-06

Predicting Desired Temporal Waypoints from Camera and Route Planner Images using End-To-Mid Imitation Learning

This study explores the use of camera and route-planner images for autonomous driving in an end-to-mid learning fashion. The overall idea is to clone human driving behavior: we humans use our vision to 'drive' and, at times, a map such as Google or Apple Maps to find directions and 'navigate'. We replicated this notion with end-to-mid imitation learning; in particular, we imitated human driving behavior by using camera and route-planner images to predict the desired waypoints, and by using a dedicated controller to follow those predicted waypoints. This work also emphasizes using minimal, cheaper sensors such as a camera and a basic map rather than expensive sensors such as Lidar or HD maps, since humans do not need such sophisticated sensors to drive. Moreover, even after decades of research, the reasonable place for 'mid' in the end-to-end approach, as well as the trade-off between data-driven and math-based approaches, is not fully understood. We therefore focused on the end-to-mid learning approach and tried to identify the reasonable place for 'mid' in the end-to-end pipeline.
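The abstract's second stage, a dedicated controller that tracks the predicted waypoints, is not specified here. One common choice for this role is a pure-pursuit steering law; the sketch below is a hypothetical, minimal illustration of that idea, not the paper's actual controller, and all names and parameter values (lookahead distance, wheelbase) are assumptions.

```python
import math

def pure_pursuit_steer(pose, waypoints, lookahead=2.0, wheelbase=2.5):
    """Steering angle (rad) to track predicted waypoints via pure pursuit.

    pose: (x, y, heading) of the vehicle in the world frame.
    waypoints: list of (x, y) points, e.g. the network's predicted waypoints.
    This is an illustrative sketch; the paper's controller may differ.
    """
    x, y, heading = pose
    # Pick the first waypoint at least `lookahead` metres away
    # (fall back to the last waypoint if none is that far).
    target = waypoints[-1]
    for wx, wy in waypoints:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Transform the target into the vehicle frame.
    dx, dy = target[0] - x, target[1] - y
    local_x = math.cos(-heading) * dx - math.sin(-heading) * dy
    local_y = math.sin(-heading) * dx + math.cos(-heading) * dy
    ld = math.hypot(local_x, local_y)
    # Pure-pursuit curvature, then a bicycle-model steering angle.
    curvature = 2.0 * local_y / (ld * ld)
    return math.atan(wheelbase * curvature)
```

For waypoints straight ahead the commanded steering is zero; waypoints offset to the left yield a positive (left) steering angle.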
