The Dimensional Model of Driver Demand: Extension to Auditory-Vocal and Mixed-Mode Tasks 2016-01-1427
The Dimensional Model of Driver Demand is extended to include Auditory-Vocal tasks (i.e., pure “voice” tasks) and Mixed-Mode tasks (i.e., Auditory-Vocal mode combined with visual-only or Visual-Manual modes). The extended model was validated with data from 24 participants using the 2014 Toyota Corolla infotainment system in a video-based surrogate driving venue. Twenty-two driver performance metrics were collected, including total eyes-off-road time (TEORT), mean single glance duration (MSGD), and proportion of long single glances (LGP). Other key metrics included response time (RT) and miss rate to a Tactile Detection Response Task (TDRT). The 22 metrics were reduced to two dimensions using Principal Component Analysis. We interpret the major dimension, explaining 60% of total variance, as the attentional effects of cognitive demand, and the minor dimension, explaining 20% of total variance, as physical demand. Cluster analysis segmented the 22 metrics cleanly into three groups, which independently agreed with the metric groupings visually evident in the 2-D loadings plot. One cluster was associated with cognitive demand and another with physical demand. A third cluster, which loaded in opposition to cognitive demand along the cognitive demand dimension, was discovered and named cognitive facilitation. By eliminating the correlation between task scores on the two dimensions, the Dimensional Model also minimizes the biasing effects arising from interactions between metrics. The extended Dimensional Model allows for a simplified understanding of the effects of all modes of secondary tasks on driver performance.
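The analysis pipeline described above — reducing a task-by-metric matrix to two principal components and then grouping metrics by the sign and magnitude of their loadings — can be sketched as follows. This is a minimal illustration, not the paper's actual analysis: the data are invented (five hypothetical metrics driven by two latent factors, standing in for the 22 real metrics), PCA is computed by power iteration on the covariance matrix, and the "clustering" is a simple loading-based assignment rather than a formal cluster analysis.

```python
import math

# Hypothetical data: rows are secondary tasks, columns are five invented
# driver-performance metrics. Metrics 0-1 follow a dominant latent factor
# (cognitive demand), metric 2 opposes it (cognitive facilitation), and
# metrics 3-4 follow a weaker second factor (physical demand).
f1 = [1.0, -1.0, 2.0, -2.0]   # dominant latent factor (zero-mean)
f2 = [1.0, 1.0, -1.0, -1.0]   # weaker latent factor, orthogonal to f1
data = [[a, 0.9 * a, -a, b, 0.8 * b] for a, b in zip(f1, f2)]

def center(X):
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in X]

def covariance(X):
    n, p = len(X), len(X[0])
    return [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
             for b in range(p)] for a in range(p)]

def top_eigenpair(C, iters=500):
    """Power iteration: leading eigenvalue/eigenvector of symmetric C."""
    p = len(C)
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[a] * sum(C[a][b] * v[b] for b in range(p)) for a in range(p))
    return lam, v

def deflate(C, lam, v):
    """Remove the found component so power iteration yields the next one."""
    p = len(C)
    return [[C[a][b] - lam * v[a] * v[b] for b in range(p)] for a in range(p)]

X = center(data)
C = covariance(X)
total_var = sum(C[j][j] for j in range(len(C)))
lam1, pc1 = top_eigenpair(C)                      # major dimension
lam2, pc2 = top_eigenpair(deflate(C, lam1, pc1))  # minor dimension
print(f"PC1 explains {lam1/total_var:.0%}, PC2 explains {lam2/total_var:.0%}")

# Group metrics by their loadings, mimicking the three clusters: loading
# with PC1 (cognitive demand), against it (facilitation), or on PC2.
clusters = {"cognitive": [], "facilitation": [], "physical": []}
for j in range(len(pc1)):
    if abs(pc1[j]) >= abs(pc2[j]):
        key = "cognitive" if pc1[j] * pc1[0] > 0 else "facilitation"
    else:
        key = "physical"
    clusters[key].append(j)
print(clusters)
```

Because the two principal components are orthogonal by construction, task scores on the two dimensions are uncorrelated — the property the abstract credits with minimizing biasing interactions between metrics.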