Development of a Camera-Based Driver State Monitoring System for Cost-Effective Embedded Solution 2020-01-1210
To prevent the severe consequences of unsafe driving behaviors, it is crucial to monitor and analyze the driver's state. Developing an effective driver state monitoring (DSM) system is particularly challenging because of the limited computational capabilities of automotive embedded systems and the need for real-time processing. Moreover, most existing research has been conducted in laboratory environments with expensive equipment and lacks in-car benchmarking and validation. This paper presents a DSM system, built on a cost-effective embedded platform, that estimates the driver's alertness and drowsiness levels and performs emotion detection. The proposed system consists of a mono camera that captures the driver's facial image in real time and a machine-learning-based detection algorithm that locates facial landmark points and uses them to infer the driver's state. In the detection module, the driver's distraction level is evaluated by estimating head pose through solving a perspective-n-point (PnP) problem, the drowsiness level is estimated from eyelid-related parameters extracted from the facial keypoint data, and a machine learning approach is used for emotion monitoring. For emotion recognition, a multilayer perceptron (MLP) network combined with Action Unit (AU) analysis reached an accuracy of 93% on the DISFA (Denver Intensity of Spontaneous Facial Action) dataset. The performance of the developed DSM system has been verified in both laboratory and in-vehicle conditions, and the experimental results show its effectiveness under both normal and low lighting. In addition, the algorithm's performance with commercially available low-cost cameras has been examined, along with the memory and processing-speed requirements of the target embedded system design.
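The abstract mentions drowsiness estimation from eyelid-related parameters extracted from facial keypoints. The paper's exact parameters are not given here; a common choice for this kind of measure is the eye aspect ratio (EAR) computed from the six per-eye landmarks of the standard 68-point annotation scheme. The sketch below is an illustrative implementation of that assumed metric, not the authors' specific formulation; the landmark coordinates are made up for the example.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) landmarks p1..p6 ordered as in
    the common 68-point annotation: p1/p4 are the horizontal eye
    corners, (p2, p6) and (p3, p5) are upper/lower eyelid pairs.
    EAR stays roughly constant while the eye is open and drops toward
    zero during a blink or prolonged eyelid closure."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Hypothetical open-eye landmarks in pixel coordinates.
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
print(eye_aspect_ratio(open_eye))  # 0.5
```

A drowsiness detector would typically threshold EAR per frame and flag sustained low values (e.g., eyes closed over many consecutive frames) rather than single blinks.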
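For the distraction estimate, the abstract describes recovering head pose by solving a perspective-n-point problem (e.g., OpenCV's `solvePnP` over 2-D landmarks and a 3-D face model). The solver returns a rotation, which is then reduced to yaw/pitch/roll angles that can be thresholded to decide whether the driver is looking away from the road. The sketch below shows only that final decomposition step, assuming a ZYX Euler convention and a row-major 3x3 rotation matrix; the PnP stage itself is omitted.

```python
from math import atan2, sqrt, degrees, radians, cos, sin

def euler_from_rotation(R):
    """Decompose a 3x3 rotation matrix (nested lists, row-major) into
    (yaw, pitch, roll) in degrees, ZYX convention. In a driver-facing
    camera frame, a large yaw or pitch magnitude suggests the head is
    turned away from the road ahead."""
    yaw = atan2(R[1][0], R[0][0])
    pitch = atan2(-R[2][0], sqrt(R[0][0] ** 2 + R[1][0] ** 2))
    roll = atan2(R[2][1], R[2][2])
    return degrees(yaw), degrees(pitch), degrees(roll)

# Example: a pure 30-degree yaw (rotation about the vertical axis).
a = radians(30)
R = [[cos(a), -sin(a), 0.0],
     [sin(a),  cos(a), 0.0],
     [0.0,     0.0,    1.0]]
print(euler_from_rotation(R))  # ~ (30.0, 0.0, 0.0)
```

The angle thresholds that separate "attentive" from "distracted" would be calibrated per camera mounting position; the values and convention here are illustrative assumptions.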