
Search Results

Technical Paper

Capability-Driven Adaptive Task Distribution for Flexible Multi-Human-Multi-Robot (MH-MR) Manufacturing Systems

2020-04-14
2020-01-1303
Collaborative robots are increasingly used in smart manufacturing because they can work beside and collaborate with human workers. With the deployment of these robots, manufacturing tasks are increasingly accomplished by multiple humans and multiple robots (MH-MR) working as a team. In such MH-MR collaboration scenarios, the task distribution among the humans and robots is critical to efficiency, and it is made more challenging by the heterogeneity of the agents. Existing approaches to task distribution among multiple agents mostly assume human capabilities are known and fixed. However, human capabilities change continually due to various factors, which may lead to suboptimal efficiency. Some research has studied human factors in manufacturing and applied them to adjust robot tasks and behaviors.
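As a rough illustration of how capability-driven task distribution could be posed, the sketch below assigns tasks to heterogeneous agents by minimizing the mismatch between task demands and each agent's current (possibly drifting) capability estimate. This is a minimal sketch, not the paper's algorithm; the `assign_tasks` function, the skill dimensions, and the use of the Hungarian algorithm via `scipy.optimize.linear_sum_assignment` are all illustrative assumptions.

```python
# Hypothetical sketch of capability-driven task distribution (not the paper's method).
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks(task_demands, agent_capabilities):
    """Assign each task to one agent (human or robot).

    task_demands:       (n_tasks, n_skills) required skill level per task
    agent_capabilities: (n_agents, n_skills) current estimated skill levels,
                        which for humans may drift over time (e.g. with fatigue)
    Returns (task_index, agent_index) pairs minimizing the total capability shortfall.
    """
    # Cost = how far each agent falls short of each task's demands.
    shortfall = np.maximum(task_demands[:, None, :] - agent_capabilities[None, :, :], 0.0)
    cost = shortfall.sum(axis=2)                       # (n_tasks, n_agents)
    task_idx, agent_idx = linear_sum_assignment(cost)  # Hungarian algorithm
    return list(zip(task_idx, agent_idx))

# Example: 3 tasks, 2 humans + 2 robots, 2 illustrative skill dimensions (dexterity, payload).
tasks = np.array([[0.8, 0.2], [0.3, 0.9], [0.5, 0.5]])
agents = np.array([[0.9, 0.3], [0.7, 0.2], [0.2, 1.0], [0.4, 0.6]])
print(assign_tasks(tasks, agents))
```

Re-running the assignment whenever the capability estimates are updated is one simple way the distribution could adapt to changing human performance.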
Technical Paper

Prediction of Human Actions in Assembly Process by a Spatial-Temporal End-to-End Learning Model

2019-04-02
2019-01-0509
Predicting human actions in the industrial assembly process is important. Foreseeing future actions before they happen is essential for flexible human-robot collaboration and crucial for safety. Vision-based human action prediction from videos provides intuitive and adequate knowledge for many complex applications. The problem can be framed as inferring a person's next action from a short video clip. History information must be considered to learn the relations among time steps and predict future steps, but extracting this history and using it to infer the future is difficult with traditional methods. In this scenario, a model is needed that captures the spatial and temporal details in past human motions and constructs the future action from a limited number of accessible human demonstrations.
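To make the spatial-temporal idea concrete, the sketch below pairs a per-frame CNN (spatial details) with an LSTM over the frame sequence (temporal history) to predict the next action from a short clip. This is a minimal PyTorch sketch under assumed settings; the `ActionPredictor` class, layer sizes, clip length, and number of action classes are illustrative and not the authors' actual end-to-end model.

```python
# Hypothetical spatial-temporal action-prediction sketch (CNN per frame + LSTM over time).
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    def __init__(self, n_actions, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Spatial stream: a small CNN encodes each video frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal stream: an LSTM aggregates the frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Classifier: predicts the next action from the last hidden state.
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, clip):                  # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))  # (batch*time, feat_dim)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])               # (batch, n_actions) logits

# Example: predict the next of 10 assembly actions from an 8-frame clip.
model = ActionPredictor(n_actions=10)
clip = torch.randn(2, 8, 3, 64, 64)
print(model(clip).shape)  # torch.Size([2, 10])
```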