Search Results

Viewing 1 to 6 of 6
Technical Paper

Driver Drowsiness Behavior Detection and Analysis Using Vision-Based Multimodal Features for Driving Safety

2020-04-14
2020-01-1211
Driving inattention caused by drowsiness is a significant cause of vehicle crashes, and there is a critical need to augment driving safety by monitoring driver drowsiness behaviors. For real-time drowsy-driving awareness, we propose a vision-based driver drowsiness monitoring system (DDMS) for driver drowsiness behavior recognition and analysis. First, an in-vehicle infrared camera captures the driver’s facial and head information in naturalistic driving scenarios, in which the driver may or may not wear glasses or sunglasses. Second, we propose and design a multimodal feature representation approach based on facial landmarks and on head pose retrieved by a convolutional neural network (CNN) regression model. Finally, an extreme learning machine (ELM) model is proposed to fuse the facial-landmark recognition model and the pose orientation for drowsiness detection. The DDMS promptly warns the driver once a drowsiness event is detected.
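The abstract does not spell out which landmark-derived features the DDMS fuses; one widely used drowsiness cue computable from facial landmarks is the eye aspect ratio (EAR), sketched below as an illustration. The six-landmark eye layout and the ~0.2 closed-eye threshold are assumptions, not details from the paper.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks p1..p6.

    EAR is the sum of the two vertical eyelid distances over twice the
    horizontal eye width; it drops toward zero as the eyelid closes,
    which makes a sustained low EAR a common drowsiness indicator.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

# Toy landmark sets: an open eye vs. a nearly closed one
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # well above a typical ~0.2 threshold
print(eye_aspect_ratio(closed_eye))  # near zero: eyelids closing
```

In a real pipeline the landmarks would come from a face-landmark detector running on the infrared frames, and the per-frame EAR would be smoothed over time before triggering a warning.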
Technical Paper

Comfort Improvement for Autonomous Vehicles Using Reinforcement Learning with In-Situ Human Feedback

2022-03-29
2022-01-0807
In this paper, a reinforcement learning-based method is proposed to adapt autonomous vehicle passengers’ expectations of comfort through in-situ human-vehicle interaction. Ride comfort has a significant influence on the user’s experience and thus on the acceptance of autonomous vehicles. There is plenty of research on the motion planning and control of autonomous vehicles; however, few studies have explicitly considered passenger comfort. This paper studies human comfort during longitudinal autonomous driving, modeling and then improving passengers’ feelings about autonomous driving behaviors. The proposed approach builds a control and adaptation strategy based on reinforcement learning that uses humans’ in-situ feedback on autonomous driving. It also proposes adapting humans to autonomous vehicles to account for improper human driving expectations.
Technical Paper

Teaching Autonomous Vehicles How to Drive under Sensing Exceptions by Human Driving Demonstrations

2017-03-28
2017-01-0070
Autonomous driving technologies can provide better safety, comfort, and efficiency for future transportation systems. Most research in this area has focused on developing sensing and control approaches to achieve various autonomous driving functions; very little has studied how to handle sensing exceptions efficiently. A simple exception in any of the sensors may lead to failures in autonomous driving functions, and the affected vehicles are then supposed to be sent back to the manufacturer for repair, which takes both time and money. This paper introduces an efficient approach that enables human drivers to teach autonomous vehicles online how to drive under sensing exceptions. A human-vehicle teaching-and-learning framework for autonomous driving is proposed, and the human teaching and vehicle learning processes for handling sensing exceptions are designed in detail.
Technical Paper

Handling Deviation for Autonomous Vehicles after Learning from Small Dataset

2018-04-03
2018-01-1091
Learning from only a small set of examples remains a huge challenge in machine learning. Despite recent breakthroughs in the applications of neural networks, the applicability of these techniques has been limited by the requirement for large amounts of training data. Moreover, standard supervised machine learning does not provide a satisfactory solution for learning new concepts from little data. Humans, however, have demonstrated the ability to learn from a few samples, which suggests that they make use of prior knowledge from previously learned models when learning new ones from a small number of training examples. In autonomous driving, the model learns to drive the vehicle with training data from humans, and most machine learning-based control algorithms require training on very large datasets. Collecting and constructing such a training dataset takes a huge amount of time and requires specific knowledge to gather relevant information.
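The abstract's premise that prior knowledge enables learning from few examples is often realized as transfer learning: keep a feature extractor from a previously learned model fixed and fit only a small head on the new data. A minimal sketch, assuming a hypothetical pre-learned feature `phi` and a closed-form least-squares head; none of these names come from the paper.

```python
def fit_head(features, targets):
    """Fit y = w*f + b by ordinary least squares on a handful of samples."""
    n = len(features)
    mf = sum(features) / n
    mt = sum(targets) / n
    cov = sum((f - mf) * (t - mt) for f, t in zip(features, targets))
    var = sum((f - mf) ** 2 for f in features)
    w = cov / var
    return w, mt - w * mf

# "Prior knowledge": a feature extractor from a previously learned model,
# reused unchanged on the new task (hypothetical example)
phi = lambda x: x * x

# New concept learned from only three examples of the rule y = 3*x^2 + 1
xs = [1.0, 2.0, 3.0]
ys = [4.0, 13.0, 28.0]
w, b = fit_head([phi(x) for x in xs], ys)

print(w, b)              # recovers ~3.0 and ~1.0 from three samples
print(w * phi(4.0) + b)  # generalizes to an unseen input
```

With the feature extractor fixed, only two parameters are estimated, which is why three samples suffice; learning `phi` itself from scratch would need far more data.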
Technical Paper

Modeling and Learning of Object Placing Tasks from Human Demonstrations in Smart Manufacturing

2019-04-02
2019-01-0700
In this paper, we present a framework for a robot to learn how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, the rational scene dictionary (RSD) corresponding to the keyframes of the task (KFT) is used to identify general object-action-location relationships. A contour based on Generalized Voronoi Diagrams (GVD) is used to determine the relative position and orientation between the object and the corresponding workpiece at the final state. In the learning phase, we keep tracking the image segments in the human demonstration. Whenever the spatial relation of some segments changes discontinuously, the state change is recorded in the RSD. The KFT is abstracted by traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes.
Technical Paper

Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model

2019-04-02
2019-01-0509
It is important to predict human actions in the industrial assembly process. Foreseeing future actions before they happen is essential for flexible human-robot collaboration and crucial to safety. Vision-based human action prediction from videos provides intuitive and adequate knowledge for many complex applications; the problem can be interpreted as deducing the next action of people from a short video clip. History information must be considered to learn the relations among time steps and predict future steps, yet extracting that history and using it to infer the future situation is difficult with traditional methods. In this scenario, a model is needed that can handle the spatial and temporal details stored in past human motions and construct the future action from a limited set of accessible human demonstrations.
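The paper's model is an end-to-end spatial-temporal network, which the abstract does not detail; the core idea of predicting the next action from history can be illustrated with a much simpler transition-count model over demonstrated action sequences. The assembly actions and demonstrations below are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count action-to-action transitions observed in demonstration sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, history):
    """Predict the next action as the most frequent successor of the last action."""
    return counts[history[-1]].most_common(1)[0][0]

# Hypothetical assembly demonstrations (short action sequences)
demos = [
    ["pick", "align", "insert", "screw"],
    ["pick", "align", "insert", "inspect"],
    ["pick", "align", "insert", "screw"],
]
model = train_bigram(demos)

print(predict_next(model, ["pick", "align"]))    # insert
print(predict_next(model, ["align", "insert"]))  # screw
```

This toy model conditions on only the last observed action; the appeal of a learned spatial-temporal model is that it can condition on the full motion history and on visual features, not just a discrete action label.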