Modeling and Learning of Object Placing Tasks from Human Demonstrations in Smart Manufacturing
2019-01-0700
In this paper, we present a framework for a robot to learn how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, the semantic event chain (SEC) is implemented to identify the general object-action-location relationships. The Generalized Voronoi Diagram (GVD) is used to determine the relative position and orientation between the object and the corresponding mount. In the learning phase, we track the image segments in the human demonstration. At the moment when the spatial relations of some segments change discontinuously, the state changes are recorded by the SEC, while the relative position and orientation of the object and the corresponding mount are represented by the GVD. When the object, or the relative position and orientation between the object and the workpiece, changes, the GVD, as well as the shape of the contours extracted from the GVD, also differs. The Fourier Descriptor (FD) is applied to describe these differences in the shape of the contours in the GVD. An FD-based similarity measurement algorithm is proposed to quantify the similarity between different GVDs. In the implementation phase, the placing motion for the robot is planned by searching for the SEC and GVD that, for a given task configuration, maximize the similarity to the SEC and GVD of the demonstrations recorded in the learning phase. In experiments with an ABB YuMi robot, we show that the robot can appropriately accomplish various placing tasks as demonstrated by humans.
Yi Chen, Weitian Wang, Zhujun Zhang, Venkat N Krovi, Yunyi Jia
Automotive Engineering, Clemson University; Harbin Institute of Technology