Vision-Based Learning from Demonstration and Collaborative Robot Systems

Date

2023

Abstract

Robot arms have been widely used in automated factories over the past decades. However, most conventional robots operate on pre-defined programs, which limits their responsiveness and adaptability to changes in the environment. When a new task is deployed, weeks of reprogramming by robot engineers or operators are inevitable, incurring downtime and high cost. To address this problem, this dissertation proposes a more intuitive way for robots to perform tasks through learning from human demonstration (LfD), built on two major components: understanding human behavior and reproducing the task with a robot. For understanding human behavior and intent, two approaches are presented. The first performs multi-action recognition with an inflated 3D network (I3D), followed by a proposed statistically fragmented approach that refines the recognition results. The second is a vision-based spatio-temporal action detection method that detects human actions in real time, focusing on fine-grained hand movements, to establish an action base. For robot reproduction according to the descriptions in the action base, the sequence of actions is integrated with the key path derived by an object-trajectory inductive method for motion planning, so that the robot reproduces the task demonstrated by the human user.
In addition to static industrial robot arms, collaborative robots (cobots) intended for human-robot interaction play an increasingly important role in intelligent manufacturing. Although cobots are promising for many industrial and home-service applications, several issues remain to be solved, including understanding human intention in a natural way, adapting task execution when the environment changes, and providing the mobility to navigate the working environment. This dissertation therefore proposes a modularized solution for mobile collaborative robot systems, in which a cobot equipped with a multi-camera localization scheme for self-localization understands human intention in a natural way via voice commands and executes the tasks instructed by the human operator in an unseen scenario when the environment changes. Comprehensive experiments are conducted and presented in this dissertation to validate the proposed approaches.
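The following is an illustrative sketch only, not code from the dissertation: it shows, in minimal form, the pipeline shape the abstract describes (recognize human actions from a demonstration, build an action base, then plan and replay motions along induced key paths). Every name in it (ActionStep, recognize_actions, Robot, reproduce_task) is a hypothetical placeholder, not the author's API, and the recognizer is stubbed so the example runs on its own.

```python
# Hypothetical skeleton of a learning-from-demonstration pipeline:
# demonstration video -> action base -> motion planning and reproduction.
from dataclasses import dataclass


@dataclass
class ActionStep:
    """One entry in the action base: an action label, the object acted on,
    and the key path (waypoints) induced from the object's trajectory."""
    label: str
    target_object: str
    key_path: list  # list of (x, y, z) waypoints


def recognize_actions(demo_video):
    """Stand-in for the vision-based recognizers named in the abstract
    (I3D with statistical fragmentation, or real-time spatio-temporal
    detection). A real system would run a trained model over the frames;
    here a fixed toy sequence is returned so the sketch is runnable."""
    return [
        ActionStep("pick", "workpiece", [(0.3, 0.1, 0.2), (0.3, 0.1, 0.05)]),
        ActionStep("place", "fixture", [(0.5, 0.2, 0.2), (0.5, 0.2, 0.05)]),
    ]


class Robot:
    """Toy robot interface used only for this sketch."""

    def plan_motion(self, key_path):
        # A real planner would interpolate between waypoints and check collisions.
        return key_path

    def execute(self, trajectory):
        for waypoint in trajectory:
            print(f"  moving to {waypoint}")


def reproduce_task(robot, action_base):
    """Replay the demonstrated task action by action."""
    for step in action_base:
        print(f"{step.label} {step.target_object}")
        robot.execute(robot.plan_motion(step.key_path))


if __name__ == "__main__":
    reproduce_task(Robot(), recognize_actions(demo_video=None))
```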

Keywords

robotic systems, learning from demonstration, action recognition, object detection, trajectory planning, collaborative robots, human-robot interaction, robot localization
