Implementing the AlphaZero Algorithm with a Quick-Win Strategy or Threat-Space Search for Gomoku

Date

2020

Abstract

AlphaZero is a general reinforcement learning algorithm that, given no domain knowledge beyond the game rules, achieves superior results after training. To help the neural network learn the winning information of Gomoku early in training, this thesis presents the Quick Win strategy and Threats-space Search. The Quick Win strategy lets the neural network learn the value of a fast win, so that when candidate moves have the same winning probability, it prefers the move that wins fastest. Threats-space Search searches the board for threats, so that the information needed to produce threatening moves is learned by the neural network, shortening the training time. This thesis demonstrates four kinds of experiments on Gomoku: linear distance weight, exponential distance weight, combining Threats-space Search with distance weight, and combining Threats-space Search with Monte Carlo Tree Search. We observe whether the AlphaZero-based models effectively enhance their playing strength by choosing faster winning moves or learning to form threats during play.
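The distance-weighting idea described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual implementation: the function names, the cap of 225 moves (a full 15x15 Gomoku board), and the decay constants are assumptions chosen for the example.

```python
def linear_weight(z, steps_to_end, max_steps=225, w_min=0.5):
    """Scale the game outcome z linearly with the distance to the end of
    the game: positions closer to the terminal state (small steps_to_end)
    keep more of the raw value, so faster wins yield larger targets.
    max_steps and w_min are illustrative assumptions."""
    frac = steps_to_end / max_steps          # 0.0 at the terminal position
    return z * (1.0 - (1.0 - w_min) * frac)

def exponential_weight(z, steps_to_end, decay=0.97):
    """Scale z exponentially with the distance to the game's end, so a
    quick win is worth noticeably more than a slow one during training.
    The decay constant is an illustrative assumption."""
    return z * (decay ** steps_to_end)
```

Under either scheme, the value target for a position reached many moves before the game ends is attenuated, which gives the network a gradient toward moves that finish the game sooner when the winning probabilities are otherwise equal.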

Keywords

AlphaZero, Neural Network, Quick Win, Threats-space Search
