Vision-Based Learning from Demonstration and Collaborative Robotic Systems (以視覺為基礎之示範學習與協作機器人系統)
dc.contributor | 許陳鑑 | zh_TW |
dc.contributor | Hsu, Chen-Chien | en_US |
dc.contributor.author | 黃品叡 | zh_TW |
dc.contributor.author | Hwang, Pin-Jui | en_US |
dc.date.accessioned | 2023-12-08T07:47:23Z | |
dc.date.available | 2026-07-01 | |
dc.date.available | 2023-12-08T07:47:23Z | |
dc.date.issued | 2023 | |
dc.description.abstract | none | zh_TW |
dc.description.abstract | Robot arms have been widely used in automated factories over the past decades. However, most conventional robots operate on the basis of pre-defined programs, which limits their responsiveness and adaptability to changes in the environment. When new tasks are deployed, weeks of reprogramming by robot engineers/operators become inevitable, resulting in downtime, high cost, and wasted time. To address this problem, this dissertation proposes a more intuitive way for robots to perform tasks through learning from human demonstration (LfD), based on two major components: understanding human behavior and reproducing the task with a robot. For understanding human behavior and intent, two approaches are presented. The first uses multi-action recognition carried out by an inflated 3D network (I3D), followed by a proposed statistically fragmented approach that enhances the action recognition results. The second is a vision-based spatial-temporal action detection method that detects human actions, with a focus on meticulous hand movements, in real time to establish an action base. For robot reproduction according to the descriptions in the action base, we integrate the sequence of actions in the action base with the key path derived by an object trajectory inductive method for motion planning, so that the robot reproduces the task demonstrated by the human user. In addition to static industrial robot arms, collaborative robots (cobots) intended for human-robot interaction play an increasingly important role in intelligent manufacturing. Although cobots are promising for many industrial and home-service applications, several issues remain to be solved, including understanding human intention in a natural way, adapting task execution when the environment changes, and providing the mobility to navigate around a working environment. This dissertation therefore proposes a modularized solution for mobile collaborative robot systems, in which a cobot equipped with a multi-camera localization scheme for self-localization can understand human intention in a natural way via voice commands and execute the tasks instructed by the human operator in unseen scenarios when the environment changes. To validate the proposed approaches, comprehensive experiments are conducted and presented in this dissertation. | en_US |
dc.description.sponsorship | 電機工程學系 | zh_TW |
dc.identifier | 80775002H-43035 | |
dc.identifier.uri | https://etds.lib.ntnu.edu.tw/thesis/detail/232505a30dc4f720e7da87a16d0d9aa1/ | |
dc.identifier.uri | http://rportal.lib.ntnu.edu.tw/handle/20.500.12235/120354 | |
dc.language | English | |
dc.subject | none | zh_TW |
dc.subject | robotic systems | en_US |
dc.subject | learning from demonstration | en_US |
dc.subject | action recognition | en_US |
dc.subject | object detection | en_US |
dc.subject | trajectory planning | en_US |
dc.subject | collaborative robots | en_US |
dc.subject | human-robot interaction | en_US |
dc.subject | robot localization | en_US |
dc.title | 以視覺為基礎之示範學習與協作機器人系統 | zh_TW |
dc.title | Vision-Based Learning from Demonstration and Collaborative Robotic Systems | en_US |
dc.type | etd |
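
Editor's illustrative note on the LfD pipeline summarized in the abstract above: the following is a minimal Python sketch, not the dissertation's implementation, of how noisy per-clip action predictions (e.g., from an I3D recognizer) might be aggregated into an ordered action base and then paired with key waypoints from a demonstrated object trajectory to form a simple motion plan. All names (fragment_predictions, plan_motion, ActionStep) and the windowed majority-vote aggregation are assumptions made for illustration only; the dissertation's statistically fragmented approach and object trajectory inductive method are more involved.

# Minimal sketch of an LfD-style pipeline (illustrative assumption, not the
# dissertation's code): per-clip labels -> action base -> simple motion plan.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float, float]  # (x, y, z) in the robot base frame

@dataclass
class ActionStep:
    action: str                 # e.g. "pick", "move", "place"
    waypoints: List[Waypoint]   # key path the robot follows for this step

def fragment_predictions(clip_labels: List[str], window: int = 5) -> List[str]:
    """Aggregate noisy per-clip labels into an ordered action sequence.

    Splits the label stream into fixed-size windows, takes a majority vote in
    each window, and collapses consecutive duplicates. This is a stand-in for
    the statistical refinement of recognition results mentioned in the abstract.
    """
    votes = [
        Counter(clip_labels[i:i + window]).most_common(1)[0][0]
        for i in range(0, len(clip_labels), window)
    ]
    action_base = [votes[0]]
    for label in votes[1:]:
        if label != action_base[-1]:
            action_base.append(label)
    return action_base

def plan_motion(action_base: List[str],
                key_path: List[Waypoint]) -> List[ActionStep]:
    """Pair each action with an equal share of the demonstrated key path."""
    steps: List[ActionStep] = []
    per_step = max(1, len(key_path) // max(1, len(action_base)))
    for i, action in enumerate(action_base):
        segment = key_path[i * per_step:(i + 1) * per_step] or [key_path[-1]]
        steps.append(ActionStep(action, segment))
    return steps

if __name__ == "__main__":
    clips = ["idle"] * 4 + ["pick"] * 6 + ["move"] * 7 + ["place"] * 5
    demo_path = [(0.3, 0.0, 0.2), (0.3, 0.1, 0.3), (0.4, 0.2, 0.3), (0.5, 0.2, 0.1)]
    for step in plan_motion(fragment_predictions(clips), demo_path):
        print(step.action, "->", step.waypoints)

Collapsing consecutive duplicate window votes is only one simple way to turn frame- or clip-level recognition into a discrete action sequence; it is used here because it keeps the sketch self-contained and runnable.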