A Robot Learning from Demonstration System via Object Detection and Multi-Motion Recognition
Date
2022
Abstract
For learning from demonstration (LfD), this thesis proposes a clustering-analysis-based motion recognition method, an architecture for real-time imitation of a demonstrator by a robot's dual arms, and a control-oriented single-camera ranging method, and integrates them into a robot LfD system based on object detection and multi-motion recognition. The thesis first reviews the relational definition of LfD and, on that basis, discusses related work as well as the LfD approach proposed here. The hardware platform is then introduced: a ROBOTIS OP3 humanoid robot and a Kinect v2 sensor for detecting human skeleton points, followed by a description and analysis of the proposed algorithms. The proposed clustering-based motion recognition method is simpler than other motion recognition approaches; recognition mainly computes the Euclidean distance between a finite motion set and the cluster centers, which makes recognition very fast and allows recognition classes to be freely added or removed without retraining all data whenever the motion categories change. In the dual-arm real-time imitation architecture, a geometrical analysis based on link vectors and virtual joints (GA-LVVJ) is combined with the Kinect v2 to establish the mapping between the human demonstrator and the robot's dual arms. The single-camera ranging serves as one of the observation functions of LfD; because its stability is critical, this thesis proposes a more stable ranging method. Finally, motion recognition, single-camera ranging, and dual-arm real-time imitation are used as observation functions of the LfD system, and the related decision derivation is presented to complete the system.
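As a rough illustration of the clustering-based recognition step described above, the following Python sketch classifies a motion feature vector by its Euclidean distance to stored cluster centers; the feature layout, motion labels, and the MotionRecognizer class are hypothetical and not taken from the thesis. It also shows the property stated in the abstract: a new motion class can be registered by adding its cluster center, with no retraining of the existing classes.

```python
import numpy as np

class MotionRecognizer:
    """Minimal nearest-cluster motion recognizer (illustrative sketch only).

    Each motion class is represented by a cluster center in a fixed-length
    feature space (e.g. joint angles stacked over a short time window).
    """

    def __init__(self):
        self.centers = []  # list of (label, center vector)

    def add_motion(self, label, center):
        """Register a motion class by its cluster center; no retraining needed."""
        self.centers.append((label, np.asarray(center, dtype=float)))

    def remove_motion(self, label):
        """Drop a motion class without touching the remaining clusters."""
        self.centers = [(l, c) for l, c in self.centers if l != label]

    def recognize(self, feature):
        """Return the label whose cluster center is nearest in Euclidean distance."""
        feature = np.asarray(feature, dtype=float)
        label, _ = min(self.centers, key=lambda lc: np.linalg.norm(feature - lc[1]))
        return label

# Hypothetical usage with 2-D features for readability.
rec = MotionRecognizer()
rec.add_motion("wave", [1.0, 0.2])
rec.add_motion("reach", [0.1, 1.5])
print(rec.recognize([0.9, 0.3]))  # -> "wave"
```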
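The dual-arm imitation architecture maps Kinect v2 skeleton points onto the robot's arm joints through link vectors. As a loose sketch of that idea (not the thesis's GA-LVVJ derivation), the code below builds the upper-arm and forearm link vectors from shoulder, elbow, and wrist positions and recovers the elbow bend angle from their dot product; the joint positions and angle convention are assumptions for illustration.

```python
import numpy as np

def link_vector(p_from, p_to):
    """Unit link vector pointing from one skeleton point to the next."""
    v = np.asarray(p_to, dtype=float) - np.asarray(p_from, dtype=float)
    return v / np.linalg.norm(v)

def elbow_angle(shoulder, elbow, wrist):
    """Elbow bend angle (radians) between the upper-arm and forearm link vectors."""
    upper_arm = link_vector(shoulder, elbow)
    forearm = link_vector(elbow, wrist)
    cos_theta = np.clip(np.dot(upper_arm, forearm), -1.0, 1.0)
    return np.arccos(cos_theta)

# Hypothetical Kinect v2 joint positions (metres, camera frame).
shoulder = [0.20, 0.40, 2.00]
elbow    = [0.25, 0.15, 2.05]
wrist    = [0.45, 0.05, 2.10]
print(np.degrees(elbow_angle(shoulder, elbow, wrist)))
```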
Keywords
learning from demonstration (LfD), motion recognition method based on cluster analysis, geometrical analysis based on link vectors and virtual joints (GA-LVVJ), real-time motion following with dual arms, control-guided single-camera ranging