Search Results

Technical Paper

Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model

Predicting human actions in industrial assembly processes is an important problem. Foreseeing future actions before they happen is essential for flexible human-robot collaboration and crucial for safety. Vision-based human action prediction from videos provides intuitive and adequate knowledge for many complex applications. The problem can be interpreted as deducing a person's next action from a short video clip. To predict future steps, history information must be considered so that the relations among time steps can be learned. However, extracting this history information and using it to infer the future situation is difficult with traditional methods. In this scenario, a model is needed that can handle the spatial and temporal details stored in past human motions and construct the future action from a limited set of accessible human demonstrations.
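The prediction task described above can be sketched as a recurrent model that accumulates history over the frames of a short clip and outputs a distribution over candidate next actions. This is only an illustrative NumPy sketch; the paper's actual end-to-end spatial-temporal architecture is not specified here, and the class and parameter names below are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class NextActionRNN:
    """Toy recurrent predictor: consumes per-frame feature vectors
    (e.g. CNN features of each video frame) and outputs a probability
    for each candidate next action. Illustrative only."""
    def __init__(self, feat_dim, hidden_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (hidden_dim, feat_dim))
        self.Wh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))
        self.Wo = rng.normal(0, 0.1, (n_actions, hidden_dim))

    def predict(self, frames):
        # frames: array of shape (T, feat_dim), one row per time step
        h = np.zeros(self.Wh.shape[0])
        for x in frames:              # accumulate history across time steps
            h = np.tanh(self.Wx @ x + self.Wh @ h)
        return softmax(self.Wo @ h)   # distribution over next actions

model = NextActionRNN(feat_dim=8, hidden_dim=16, n_actions=4)
clip = np.random.default_rng(1).normal(size=(10, 8))   # a 10-frame clip
probs = model.predict(clip)
```

In a trained system the weight matrices would be learned end to end from demonstration videos rather than drawn at random as they are here.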
Technical Paper

Modeling and Learning of Object Placing Tasks from Human Demonstrations in Smart Manufacturing

In this paper, we present a framework for a robot to learn how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, a rational scene dictionary (RSD) corresponding to the keyframes of the task (KFT) is used to identify general object-action-location relationships. A contour based on Generalized Voronoi Diagrams (GVD) is used to determine the relative position and orientation between the object and the corresponding workpiece at the final state. In the learning phase, we keep tracking the image segments in the human demonstration. When the spatial relation of some segments changes discontinuously, the state change is recorded in the RSD. The KFT is abstracted by traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes.
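The keyframe step described above — flagging the moments when a tracked segment's spatial relation changes discontinuously — can be sketched as a threshold test on per-step displacement. This is a hypothetical simplification of the paper's RSD/KFT pipeline; the function name, input format, and threshold are assumptions for illustration.

```python
import math

def keyframes_from_demo(segment_positions, jump_thresh=0.5):
    """Return the time steps at which any tracked image segment moves
    discontinuously (its displacement exceeds jump_thresh between
    consecutive frames). segment_positions is a list over time of
    dicts mapping segment_id -> (x, y). Illustrative sketch only."""
    keyframes = []
    for t in range(1, len(segment_positions)):
        prev, cur = segment_positions[t - 1], segment_positions[t]
        for sid in prev.keys() & cur.keys():
            dx = cur[sid][0] - prev[sid][0]
            dy = cur[sid][1] - prev[sid][1]
            if math.hypot(dx, dy) > jump_thresh:  # discontinuous change
                keyframes.append(t)
                break
    return keyframes

# Toy demonstration: a tracked object slides slowly, then jumps at t=3
demo = [{"obj": (0.0, 0.0)}, {"obj": (0.1, 0.0)}, {"obj": (0.2, 0.0)},
        {"obj": (2.0, 0.0)}, {"obj": (2.1, 0.0)}]
kf = keyframes_from_demo(demo)
```

In the paper's framework the recorded state changes would populate the RSD, from which the KFT is later abstracted; here only the detection of the discontinuous moments is sketched.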
Technical Paper

Handling Deviation for Autonomous Vehicles after Learning from Small Dataset

Learning from only a small set of examples remains a major challenge in machine learning. Despite recent breakthroughs in the application of neural networks, the applicability of these techniques has been limited by the requirement for large amounts of training data. Moreover, standard supervised machine learning does not provide a satisfactory solution for learning new concepts from little data. Humans, however, have demonstrated the ability to learn effectively from few samples, which suggests that they draw on prior knowledge from previously learned models when learning new ones from a small number of training examples. In autonomous driving, a model learns to drive the vehicle from human training data, and most machine-learning-based control algorithms require training on very large datasets. Collecting and constructing such a training dataset takes a huge amount of time and requires specific knowledge to gather relevant information.
Technical Paper

Teaching Autonomous Vehicles How to Drive under Sensing Exceptions by Human Driving Demonstrations

Autonomous driving technologies can provide better safety, comfort, and efficiency for future transportation systems. Most research in this area has focused on developing sensing and control approaches to achieve various autonomous driving functions; very little has studied how to handle sensing exceptions efficiently. A simple exception in any of the sensors may lead to failures in autonomous driving functions, and the affected vehicles are then expected to be sent back to the manufacturer for repair, which costs both time and money. This paper introduces an efficient approach that enables human drivers to teach autonomous vehicles online how to drive under sensing exceptions. A human-vehicle teaching-and-learning framework for autonomous driving is proposed, and the human teaching and vehicle learning processes for handling sensing exceptions in autonomous vehicles are designed in detail.