A Vision-Based Human Action Recognition System for Interaction between a Companion Robot and Its Care Recipient

Date

2017


Abstract

In recent years, sales of home companion robots have grown steadily while prices have continued to fall, so more and more households can afford one. The primary function of a home companion robot is to assist family members or caregivers in accompanying and caring for young children and the elderly. By understanding the behavior and state of children and elders, the robot can respond appropriately, thereby providing interaction, companionship, and care. This study develops a vision-based human action recognition system for interaction between a companion robot and its care recipient; the system automatically recognizes the care recipient's actions to support companionship and care.

The system first reads in continuous depth images and continuous color images, then determines whether a person is present in the scene. A depth motion map is built from the depth sequence and a color motion map from the color sequence. The two motion maps are merged into a single image, from which a Histogram of Oriented Gradients (HOG) is extracted as the feature for action recognition. Finally, these features are fed into a support vector machine (SVM) for classification to obtain the recognition result.

Eight actions are recognized: waving the right hand, waving the left hand, shaking the right hand, shaking the left hand, hugging, bowing, walking, and boxing. Database1 was recorded by five subjects, each performing all eight actions 20 times, for a total of 800 videos; 640 videos were used for training and 160 for testing, and the system achieved a recognition accuracy of 88.75%. Database2 was recorded by a single subject, a 12-year-old child; all of its 320 videos were used for testing, yielding an accuracy of 74.37%. Database3 contains actions recorded while the robot was moving; four subjects produced 320 videos, with 160 used for training and 160 for testing, yielding an accuracy of 51.25%. These results indicate that the system's recognition is reasonably reliable.
Companion robots can help people with special-care needs such as the elderly, children, and the disabled, and in recent years both demand for and supply of home companion robots have grown rapidly. In this study, a vision-based human action recognition system for interaction between companion robots and humans is developed. The aim is to provide a practical method for recognizing a set of common, socially acceptable behaviors. First, a Kinect 2.0 captures 3D depth images and 2D color images simultaneously with its depth sensor and RGB camera. Second, the system combines a 3D depth motion map (3D-DMM) with a color motion map and describes the merged result with a Histogram of Oriented Gradients (HOG) descriptor; in the color-image processing stage, fuzzy skin detection is used for body detection. Finally, a support vector machine (SVM) classifies the HOG descriptors to obtain the action recognition result. Experimental results show that on Database1 (800 sequences: 640 for training, 160 for testing) the proposed method achieves an average recognition accuracy of 88.75%; on Database2 (320 sequences, all used for testing) the accuracy is 74.37%; and on Database3 (320 sequences: 160 for training, 160 for testing) the accuracy is 51.25%.
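The motion-map and HOG steps of the pipeline above can be sketched in simplified form. The snippet below is an illustrative approximation only, not the thesis implementation: `depth_motion_map` and `hog_like_descriptor` are hypothetical helper names, the motion map is reduced to a sum of absolute inter-frame differences, and the single-cell HOG omits the cell/block structure and block normalization of the full descriptor (the SVM stage is also left out).

```python
import numpy as np

def depth_motion_map(frames):
    """Accumulate absolute inter-frame differences of a depth sequence.
    frames: array-like of shape (T, H, W) holding T depth images."""
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))   # |frame[t+1] - frame[t]|
    return diffs.sum(axis=0)                  # one H x W motion map

def hog_like_descriptor(img, n_bins=9):
    """Simplified HOG: one global cell, unsigned gradient orientations."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)                    # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation folded into [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy usage: a bright square drifting rightward over 4 synthetic depth frames.
T, H, W = 4, 16, 16
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 4:8, 2 + t:6 + t] = 100.0
dmm = depth_motion_map(seq)
feat = hog_like_descriptor(dmm)
print(dmm.shape, feat.shape)   # (16, 16) (9,)
```

In the full system the feature vector would be extracted from the merged depth/color motion map and passed to an SVM trained on the labeled action videos.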

Keywords

human action recognition, Histogram of Oriented Gradients, Kinect 2.0 for Xbox One, depth image, color image, 3D depth motion map, Support Vector Machine, fuzzy skin detection, socially acceptable manner
