
Methodologies for Evaluating and Optimizing Multimodal Human-Machine-Interface of Autonomous Vehicles (2018-01-0494)

With the rapid development of artificial intelligence, autonomous driving technology will ultimately reshape the automotive industry. Although fully autonomous cars are not yet commercially available to ordinary consumers, partially autonomous vehicles, defined as Level 2 and Level 3 autonomous vehicles by the SAE J3016 standard, are being widely tested by automakers and researchers.
A typical Human-Machine-Interface (HMI) for a vehicle is designed to support the human driver's dominant role. Although modern driver-assistance systems allow vehicles to take over control in certain scenarios, the typical human-machine interface has not changed dramatically for a long time. With deep-learning neural-network technologies penetrating automotive applications, multi-modal communication between a driver and a vehicle can be enabled by a cost-effective solution. A multi-modal human-machine interface will allow a driver to interact easily with autonomous vehicles, supporting smooth switching between manual control and automation. However, unlike the steering wheel, there is no conventional or standard multi-modal human-machine interface for autonomous vehicles. Moreover, unlike buttons and knobs, which cause little confusion across different countries, multi-modal communication is affected by cultural nuances. Automotive Original Equipment Manufacturers (OEMs) can deploy the typical human-machine interface in different countries and automotive markets with little adaptation, but OEMs must adjust a multi-modal human-machine interface to account for cultural differences, driving habits, social cognition, and traffic laws. Design methodologies for human-machine-interface systems on autonomous vehicles of different levels are elaborated.
The goal of a multi-modal human-machine interface in partially automated vehicles (SAE Level 2) and conditionally automated vehicles (SAE Level 3) is not only to mitigate driver fatigue during driving, but also to maintain enough driver engagement that the driver can take over control quickly when a switch is necessary. The two sides of this design goal lie on a trade-off curve. Methodologies for optimizing multi-modal communication to support HMI design are elaborated and compared in this paper.
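The trade-off curve mentioned above can be illustrated with a minimal sketch. The model below is purely hypothetical and is not the paper's method: it assumes fatigue relief grows and driver engagement falls (both concavely) with the level of automation assistance, and grid-searches the assistance level that maximizes a weighted score of the two design goals.

```python
import math

# All functions and weights here are illustrative assumptions, not the
# paper's published model. `a` is the automation-assistance level in [0, 1]
# (0 = fully manual, 1 = maximum assistance within the SAE level's bounds).

def fatigue_relief(a: float) -> float:
    """Hypothetical: more assistance relieves more fatigue (diminishing returns)."""
    return math.sqrt(a)

def engagement(a: float) -> float:
    """Hypothetical: more assistance lowers driver engagement, and with it
    the driver's readiness to take over control."""
    return math.sqrt(1.0 - a)

def score(a: float, w_relief: float, w_engage: float) -> float:
    """Weighted combination of the two competing design goals."""
    return w_relief * fatigue_relief(a) + w_engage * engagement(a)

def best_level(w_relief: float, w_engage: float, steps: int = 100) -> float:
    """Grid search for the assistance level that best balances the goals."""
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates, key=lambda a: score(a, w_relief, w_engage))
```

With equal weights this toy model settles on an interior assistance level (0.5) rather than either extreme, which is the qualitative point of the trade-off: a Level 2/3 HMI should neither maximize automation (losing takeover readiness) nor minimize it (losing the fatigue benefit).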

