基於深度學習的車輛隨意網路路由協定
Date
2018
Abstract
Vehicular Ad-hoc Networks (VANETs) provide the network foundation required by many smart-vehicle applications and by the Intelligent Transport System (ITS). By exchanging packets between vehicles, messages can be delivered for applications such as driving safety, road-condition warnings, and driver-assistance systems. VANETs are characterized by high node mobility and rapidly changing topology; combined with complex road environments and signal interference, reliably delivering packets to their destinations has become the central research problem of routing in VANETs.
This research proposes Deep Reinforcement Learning Routing for VANET (vDRL). Like position-based routing protocols, vDRL uses vehicle locations, but it does not rely on any fixed routing rules; the generalization ability of reinforcement learning allows it to adapt to different environments and vehicle characteristics. Experimental results show that across most scenario settings, vDRL not only achieves a higher packet delivery ratio than Greedy Perimeter Stateless Routing (GPSR) but also lowers both end-to-end delay and the number of nodes required for routing. In addition, this research presents an effective workflow that imports different street maps and real traffic-flow data and uses reinforcement learning to train an optimized routing protocol.
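For context on the GPSR baseline mentioned above: in its greedy mode, GPSR forwards a packet to the neighbor geographically closest to the destination, falling back to perimeter mode when no neighbor is closer than the current node. A minimal sketch of that greedy next-hop rule (function and variable names are illustrative, not taken from the thesis):

```python
import math

def greedy_next_hop(current, neighbors, destination):
    """GPSR-style greedy forwarding: pick the neighbor closest to the
    destination, but only if it is closer than the current node itself
    (otherwise greedy mode fails and perimeter-mode recovery takes over)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = min(neighbors, key=lambda n: dist(n, destination), default=None)
    if best is not None and dist(best, destination) < dist(current, destination):
        return best
    return None  # local maximum: no neighbor makes progress

# Forwarding from (0, 0) toward (10, 0): (5, -1) is the closest neighbor
hop = greedy_next_hop((0, 0), [(3, 1), (5, -1), (-2, 0)], (10, 0))
```

The fixed geometric rule is exactly what vDRL replaces with a learned policy: greedy forwarding can strand a packet at a local maximum (e.g. a dead-end street), whereas a trained policy can learn to route around such spots.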
In Intelligent Transport Systems (ITS), smart-vehicle applications such as collision and road-hazard warnings provide a safer and smarter driving environment. For safety applications, information is often exchanged vehicle-to-vehicle (V2V); the fundamental network infrastructure for this is called a Vehicular Ad-hoc Network (VANET). The main difference between a VANET and a Mobile Ad-hoc Network (MANET) is the highly dynamic network topology caused by the high mobility of vehicles. This characteristic makes it harder for a VANET routing protocol to achieve a high packet delivery ratio while reducing end-to-end delay and overhead; designing an efficient routing protocol is therefore one of the active research topics in VANETs. In this research, we propose Deep Reinforcement Learning Routing for VANET (vDRL) to address this problem. As in position-aware routing protocols, vehicle locations are used in vDRL; however, reinforcement learning is applied for next-hop selection. Unlike other VANET routing protocols, vDRL does not require fixed routing rules, which allows it to adapt to the highly dynamic vehicular network environment. In addition, a network simulator is implemented that combines reinforcement learning with a neural network model. The simulator can generate a variety of maps with different streets and traffic models for training the routing protocol to adapt to different scenarios. The experimental results show that the proposed vDRL routing protocol achieves a high delivery rate and low delay with low overhead.
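To illustrate what "reinforcement learning applied for next-hop selection" means in principle, here is a tabular Q-learning sketch on a tiny static graph. This is only a simplified analogue under stated assumptions (the thesis's vDRL uses deep RL over vehicle positions in a dynamic topology); the graph, reward values, and hyperparameters below are invented for illustration:

```python
import random
from collections import defaultdict

# Toy static topology: node -> list of neighbors (next-hop candidates).
GRAPH = {0: [1, 2], 1: [3], 2: [3], 3: []}
DEST = 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(node, next_hop)] -> estimated routing value

def choose(node):
    """Epsilon-greedy next-hop choice among the current node's neighbors."""
    nbrs = GRAPH[node]
    if random.random() < EPS:
        return random.choice(nbrs)
    return max(nbrs, key=lambda n: Q[(node, n)])

def train(episodes=500):
    for _ in range(episodes):
        node = 0
        while node != DEST:
            nxt = choose(node)
            # Reaching the destination is rewarded; every hop costs a little,
            # which pushes the policy toward short routes (low hop count).
            reward = 1.0 if nxt == DEST else -0.1
            future = max((Q[(nxt, n)] for n in GRAPH[nxt]), default=0.0)
            Q[(node, nxt)] += ALPHA * (reward + GAMMA * future - Q[(node, nxt)])
            node = nxt

train()
```

After training, following the greedy policy (largest Q value at each node) forwards packets from node 0 to the destination. Replacing the Q table with a neural network conditioned on vehicle positions is what makes the approach generalize to maps and traffic patterns not seen during training.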
Keywords
Vehicular Ad-hoc Network (VANET), Intelligent Transport System (ITS), Reinforcement Learning, Routing Protocol, Artificial Intelligence (AI), Deep Learning, Position-awareness Routing