基於深度強化學習之移動大型重物 (Moving Large Size and Heavy Object with Deep Reinforcement Learning)

dc.contributor: 包傑奇 (zh_TW)
dc.contributor: Jacky Baltes (en_US)
dc.contributor.author: 許哲菡 (zh_TW)
dc.contributor.author: Hanjaya Mandala (en_US)
dc.date.accessioned: 2020-12-14T08:53:28Z
dc.date.available: 2020-07-27
dc.date.available: 2020-12-14T08:53:28Z
dc.date.issued: 2020
dc.description.abstract: none (zh_TW)
dc.description.abstract: Humanoid robots are designed and expected to work alongside humans. In daily life, Moving Large-size and Heavy Objects (MLHO) is a common activity that is dangerous for humans. In this thesis, we propose a novel hierarchical learning-based algorithm in which an adult-sized humanoid robot transports an object by dragging it. The proposed method demonstrated its robustness on a THORMANG-Wolf adult-sized humanoid robot, which dragged an object of 84.6 kg, roughly double its own weight, for 2 meters. The method consists of three hierarchical deep-learning algorithms that solve the MLHO problem, distributed across robot vision and behavior control. For robot vision, we first propose deep-learning algorithms for 3D object classification and surface detection. For 3D object classification, we propose a Three-layer Convolution Volumetric Network (TCVN); its input is a voxel-grid representation of point-cloud data acquired from the robot's LiDAR scanner. For surface detection, we propose a lightweight real-time instance-segmentation model called Tiny-YOLACT (You Only Look At Coefficients) to segment the floor in the robot's camera images. Tiny-YOLACT is adapted from the YOLACT model and uses ResNet-18 as its backbone network. For robot behavior control, the main part of this thesis, we address the MLHO problem on an adult-sized humanoid robot with a deep reinforcement learning algorithm for the first time. Here we propose a Deep Q-Learning algorithm, named DQL-COB, that trains a deep model as a control policy for offsetting the Center of Body (CoB) of the robot when dragging different objects; the CoB offset is applied to keep the CoB tracking the robot's center of mass.
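The TCVN input described above, a voxel grid built from LiDAR point clouds, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the abstract does not state the grid resolution, so the `grid_size=32` here is a hypothetical choice.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Convert an (N, 3) point cloud into a binary occupancy voxel grid.

    grid_size is a hypothetical resolution; the abstract does not give
    the exact input dimensions of the TCVN model.
    """
    pts = np.asarray(points, dtype=np.float64)
    # Normalize the cloud into the unit cube so it fills the grid.
    mins = pts.min(axis=0)
    spans = pts.max(axis=0) - mins
    spans[spans == 0] = 1.0                       # guard degenerate axes
    idx = ((pts - mins) / spans * (grid_size - 1)).astype(int)
    # Mark every cell that contains at least one point as occupied.
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: a random stand-in for a LiDAR scan of 1000 points.
cloud = np.random.rand(1000, 3)
vox = voxelize(cloud)
print(vox.shape)
```

A dense occupancy grid like this is what a 3D convolutional network such as TCVN can consume directly, at the cost of memory growing cubically with resolution.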
As a result, the robot keeps its balance by maintaining the Zero Moment Point (ZMP) inside the support polygon. The DQL-COB algorithm was first trained in the ROS Gazebo simulator, avoiding experiments that are costly in time and constrained by the real environment, and was then transferred to the real robot on three different types of surfaces. To evaluate the stability of the THORMANG-Wolf robot with the proposed methods, we conducted two types of experiments on three surface types with eight different objects: in one scenario the learning algorithm's input combines the IMU with the foot pressure (F/T) sensors; in the other it uses IMU data alone. In these experiments, the success rate of the DQL-COB algorithm on the real robot is 92.91% with the F/T sensors and 83.75% without them. Moreover, the TCVN model achieved 90% accuracy on 3D object classification in real time. Correspondingly, the Tiny-YOLACT model achieved 34.16 mAP on the validation data at an average of 29.56 FPS on a single NVIDIA GTX-1060 GPU. (en_US)
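The Q-learning core behind DQL-COB can be illustrated with a toy version. Everything below is a hypothetical simplification: the real method uses a deep network over IMU (and optionally F/T) inputs in Gazebo, whereas this sketch uses a tabular Q-function, an invented discrete state for "balance", and a stand-in reward that penalizes drifting away from the support-polygon center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: states abstract the robot's balance
# reading; actions shift the Center of Body backward / hold / forward.
N_STATES, N_ACTIONS = 20, 3
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action):
    """Toy stand-in for the simulator: reward keeping the abstracted
    ZMP near the middle of the state range (the support polygon)."""
    drift = int(rng.integers(-1, 2))              # random disturbance
    next_state = int(np.clip(state + (action - 1) + drift,
                             0, N_STATES - 1))
    reward = -abs(next_state - N_STATES // 2)     # penalty for imbalance
    return next_state, reward

state = N_STATES // 2
for _ in range(5000):
    # Epsilon-greedy action selection.
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update, the tabular core of Deep Q-Learning.
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print(Q[N_STATES // 2])
```

In DQL-COB the table is replaced by a deep model so that continuous sensor readings can be handled, and training happens in simulation before transfer to the real robot, as the abstract describes.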
dc.description.sponsorship: 電機工程學系 (Department of Electrical Engineering) (zh_TW)
dc.identifier: G060775042H
dc.identifier.uri: http://etds.lib.ntnu.edu.tw/cgi-bin/gs32/gsweb.cgi?o=dstdcdr&s=id=%22G060775042H%22.&
dc.identifier.uri: http://rportal.lib.ntnu.edu.tw:80/handle/20.500.12235/110778
dc.language: 英文 (English)
dc.subject: 人形機器人 (zh_TW)
dc.subject: 深度強化學習 (zh_TW)
dc.subject: 拖動物件 (zh_TW)
dc.subject: 深度學習 (zh_TW)
dc.subject: humanoid robot (en_US)
dc.subject: deep reinforcement learning (en_US)
dc.subject: dragging object (en_US)
dc.subject: deep learning (en_US)
dc.title: 基於深度強化學習之移動大型重物 (zh_TW)
dc.title: Moving Large Size and Heavy Object with Deep Reinforcement Learning (en_US)

Files

Original bundle
Name: 060775042h01.pdf
Size: 5.42 MB
Format: Adobe Portable Document Format