Journal Article

Multi-task Learning of Semantics, Geometry and Motion for Vision-based End-to-End Self-Driving

2021-04-06
2021-01-0194
It is hard to achieve fully autonomous driving with hand-crafted, generalized decision-making rules, while an end-to-end self-driving system is low in complexity, does not require hand-crafted rules, and can handle complex situations. Modular self-driving systems require multi-task fusion and high-precision maps, which increase system complexity and cost. In end-to-end self-driving, scene state information is usually obtained from a camera alone, so image processing is very important. Many deep learning applications benefit from multi-task learning: combining all tasks into one model accelerates training and improves accuracy, reduces the amount of computation, and allows these systems to run in real time. An approach that obtains rich scene state information through multi-task learning is therefore very attractive. In this paper, we propose a multi-task learning approach for semantics, geometry, and motion.
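The core idea above — one model producing semantic, geometric, and motion outputs from a single shared image encoding, trained with a combined loss — can be sketched minimally. This is an illustrative toy in NumPy, not the paper's architecture: the layer sizes, head names, and the unweighted summed loss are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # Shared feature extraction, computed once per frame and
    # reused by every task head (the source of the compute savings).
    return np.tanh(x @ W)

def task_head(features, W):
    # Lightweight per-task output layer on top of the shared features.
    return features @ W

# Hypothetical sizes: 128-dim camera feature in, 64-dim shared code,
# 8-dim output per task (semantics, geometry, motion).
W_shared = rng.normal(size=(128, 64))
heads = {t: rng.normal(size=(64, 8)) for t in ("semantics", "geometry", "motion")}

x = rng.normal(size=(1, 128))        # one encoded camera frame
z = shared_encoder(x, W_shared)      # shared computation, done once
outputs = {t: task_head(z, W) for t, W in heads.items()}

# A combined multi-task loss (here an unweighted sum of per-task MSEs)
# lets a single training step update the shared encoder for all tasks.
targets = {t: np.zeros((1, 8)) for t in outputs}
total_loss = sum(np.mean((outputs[t] - targets[t]) ** 2) for t in outputs)
```

In a real system each head would predict a different structured output (segmentation map, depth, optical flow), and the per-task losses are usually weighted rather than summed uniformly.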