
Search Results

Viewing 1 to 3 of 3
Technical Paper

Handling Deviation for Autonomous Vehicles after Learning from Small Dataset

2018-04-03
2018-01-1091
Learning from only a small set of examples remains a major challenge in machine learning. Despite recent breakthroughs in the application of neural networks, the applicability of these techniques has been limited by the requirement for large amounts of training data. Moreover, standard supervised machine learning does not provide a satisfactory solution for learning new concepts from little data. Humans, however, have demonstrated the ability to learn enough information from only a few samples. This suggests that humans draw on prior knowledge from previously learned models when learning new ones from a small number of training examples. In autonomous driving, the model learns to drive the vehicle from human training data, and most machine-learning-based control algorithms require training on very large datasets. Collecting and constructing such a training dataset takes a huge amount of time and requires specific knowledge to gather relevant information.
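The idea of reusing prior knowledge when data is scarce can be illustrated with a deliberately simplified sketch: a "pretrained" feature extractor is kept frozen (here a random projection stands in for weights learned on a large dataset), and only a small linear head is fitted to a handful of labelled examples. All names and the toy task below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "prior knowledge": a fixed random projection standing in for a
# feature extractor pretrained on a large dataset (illustrative only).
W_frozen = rng.normal(size=(4, 16))

def features(X):
    # Nonlinear features produced by the frozen extractor.
    return np.tanh(X @ W_frozen)

# Small dataset: only 10 labelled examples of a toy binary task.
X = rng.normal(size=(10, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w_head = np.zeros(16)  # only this small head is trained

def predict(X):
    # Logistic readout on top of the frozen features.
    return 1.0 / (1.0 + np.exp(-features(X) @ w_head))

# Fine-tune the head with plain gradient descent on the logistic loss.
for _ in range(500):
    p = predict(X)
    grad = features(X).T @ (p - y) / len(y)
    w_head -= 0.5 * grad

train_acc = ((predict(X) > 0.5) == y).mean()
```

Because the feature extractor is fixed, only 16 parameters are learned, which is why a dataset of 10 examples can suffice; training the whole network from scratch on so little data would not.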
Technical Paper

Modeling and Learning of Object Placing Tasks from Human Demonstrations in Smart Manufacturing

2019-04-02
2019-01-0700
In this paper, we present a framework in which a robot learns how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, the rational scene dictionary (RSD) corresponding to the keyframes of the task (KFT) is used to identify general object-action-location relationships. A contour based on Generalized Voronoi Diagrams (GVD) is used to determine the relative position and orientation between the object and the corresponding workpiece in the final state. In the learning phase, we track the image segments throughout the human demonstration. Whenever the spatial relation between some segments changes discontinuously, the state change is recorded in the RSD. The KFT is abstracted by traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes.
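The keyframe idea above, detecting the moments when a spatial relation between tracked segments changes discontinuously, can be sketched with a toy relation. The overlap test, box representation, and frame sequence below are all simplifying assumptions for illustration; the paper's actual RSD/KFT machinery is much richer.

```python
def touching(a, b):
    # Toy spatial relation: axis-aligned boxes (x, y, w, h) overlap.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def keyframes(frames):
    """Frames where any pairwise segment relation flips -> candidate keyframes."""
    kfs, prev = [], None
    for t, segs in enumerate(frames):
        state = tuple(touching(segs[i], segs[j])
                      for i in range(len(segs))
                      for j in range(i + 1, len(segs)))
        if prev is not None and state != prev:
            kfs.append(t)  # discontinuous change in a spatial relation
        prev = state
    return kfs

# Hypothetical demonstration: an object slides toward a static workpiece,
# makes contact at frame 5, and then stays in place.
frames = [[(min(x, 5), 0, 1, 1), (5, 0, 1, 1)] for x in range(8)]
result = keyframes(frames)
```

Here the only keyframe is the contact event at frame 5, mirroring how the framework records a state change in the RSD exactly when a relation flips rather than at every frame.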
Technical Paper

Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model

2019-04-02
2019-01-0509
Predicting human actions in an industrial assembly process is important: foreseeing future actions before they happen is essential for flexible human-robot collaboration and crucial for safety. Vision-based human action prediction from videos provides intuitive and adequate knowledge for many complex applications. The problem can be interpreted as deducing a person's next action from a short video clip. History information must be considered to learn the relations among time steps in order to predict future ones; however, it is difficult to extract this history and use it to infer the future situation with traditional methods. In this scenario, a model is needed that can handle the spatial and temporal details stored in past human motions and construct the future action from a limited set of accessible human demonstrations.
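The temporal half of this problem, predicting the next action from a history of observed actions, can be sketched with a simple frequency model over demonstrations. This is a deliberate simplification: the paper describes a spatial-temporal end-to-end deep model operating on video, whereas the action labels and demonstration sequences below are hypothetical stand-ins.

```python
from collections import Counter, defaultdict

def train(sequences, order=2):
    """Count how often each action follows each length-`order` history."""
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(order, len(seq)):
            model[tuple(seq[i - order:i])][seq[i]] += 1
    return model

def predict_next(model, history, order=2):
    # Predict the most frequent continuation of the recent history.
    ctx = tuple(history[-order:])
    if ctx not in model:
        return None  # unseen history: no prediction
    return model[ctx].most_common(1)[0][0]

# Hypothetical assembly demonstrations (action labels are illustrative).
demos = [
    ["reach", "grasp", "move", "place", "release"],
    ["reach", "grasp", "move", "insert", "release"],
    ["reach", "grasp", "move", "place", "release"],
]
m = train(demos)
nxt = predict_next(m, ["grasp", "move"])
```

Even this crude model shows why history matters: given the two most recent actions, it predicts "place" because that continuation was demonstrated more often than "insert". An end-to-end model additionally learns the spatial cues from raw video instead of relying on pre-labelled actions.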