Search Results

Viewing 1 to 2 of 2
Technical Paper

Additional Findings on the Multi-Modal Demands of “Voice-Command” Interfaces

2016-04-05
2016-01-1428
This paper presents the results of a study of how people interacted with a production voice-command-based interface while driving on public roadways. Tasks included phone contact calling, full address destination entry, and point-of-interest (POI) selection. Baseline driving, driving while engaging in multiple levels of an auditory-vocal cognitive reference task, and manual radio tuning were used as comparison points. Measures included self-reported workload, task performance, physiological arousal, glance behavior, and vehicle control for an analysis sample of 48 participants (gender balanced across ages 21-68). Task analysis and glance measures confirm earlier findings that voice-command interfaces do not always allow the driver to keep their hands on the wheel and eyes on the road, as some assume.
Technical Paper

A Framework for Robust Driver Gaze Classification

2016-04-05
2016-01-1426
The challenge of developing a robust, real-time driver gaze classification system is that it has to handle difficult edge cases that arise in real-world driving conditions: extreme lighting variations, eyeglass reflections, sunglasses, and other occlusions. We propose a single-camera end-to-end framework for classifying driver gaze into a discrete set of regions. This framework includes data collection, semi-automated annotation, offline classifier training, and an online real-time image processing pipeline that classifies the gaze region of the driver. We evaluate an implementation of each component on various subsets of a large on-road dataset. The key insight of our work is that robust driver gaze classification in real-world conditions is best approached by leveraging the power of supervised learning to generalize over the edge cases present in large annotated on-road datasets.
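The offline-training / online-classification split the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual method: the feature representation, region names, classifier choice, and synthetic training data here are all assumptions for the sake of the example.

```python
# Sketch of the framework in the abstract: a classifier trained offline on
# annotated data, then applied online to per-frame feature vectors.
# All names, features, and data below are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical discrete gaze regions (the paper's actual region set may differ).
GAZE_REGIONS = ["road", "left_mirror", "right_mirror",
                "rearview", "center_stack", "instrument_cluster"]

rng = np.random.default_rng(0)

# Offline stage: train on annotated face/eye feature vectors.
# Here the features are synthetic; in practice they would come from the
# data-collection and semi-automated annotation steps described above.
X_train = rng.normal(size=(600, 10))
y_train = rng.integers(0, len(GAZE_REGIONS), size=600)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Online stage: classify each incoming frame's feature vector into a region.
def classify_frame(features: np.ndarray) -> str:
    idx = clf.predict(features.reshape(1, -1))[0]
    return GAZE_REGIONS[idx]

print(classify_frame(rng.normal(size=10)))
```

The design point the abstract emphasizes is that robustness comes less from the specific model and more from training on a large annotated on-road dataset whose examples cover the hard edge cases (lighting, eyewear, occlusion).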