Please use this identifier to cite or link to this item: http://rportal.lib.ntnu.edu.tw:80/handle/77345300/36400
Title: 以完美雜湊優化樣式比對記憶體架構設計
Optimization of Pattern Matching Memory Architecture Using Perfect Hashing
Authors: Department of Technology Application and Human Resource Development, National Taiwan Normal University
林政宏
Issue Date: 31-Jul-2013
Abstract: Pattern matching plays the most important role in a network intrusion detection system (NIDS) and dominates its performance. Owing to the ever-increasing number of attack patterns and growing network complexity, traditional sequential string matching algorithms and architectures are no longer adequate for current high-speed networks. Accelerating and optimizing pattern matching circuits and algorithms has therefore become a key topic at major conferences such as INFOCOM, GLOBECOM, SIGCOMM, ISCA, and ANCS.
In our previous research, we proposed a parallel string matching algorithm, the Parallel-Failureless Aho-Corasick (PFAC) algorithm, which is well suited to graphics processing units (GPUs) and multi-core central processing units (CPUs). Running on high-end NVIDIA GPUs such as the GTX 480 and GTX 580, PFAC achieves a throughput of over 100 Gbps, an order-of-magnitude speedup over the traditional Aho-Corasick algorithm running on multi-core CPUs such as the Intel Core i7. The results have been presented, or are scheduled for presentation, at GLOBECOM 2010, GLOBECOM 2011, the 21st VLSI Design/CAD Symposium (best paper nominee), the 22nd VLSI Design/CAD Symposium, GTC 2012, and INFOCOM 2012. In addition, we have released the PFAC library and source code on Google Code (http://code.google.com/p/pfac/) and have received much positive feedback from users.
The PFAC algorithm is based on memory architectures and is suited to multi-core CPUs and GPUs. In a memory architecture, attack patterns are first compiled into a deterministic finite automaton (DFA), and pattern matching is then performed by traversing the corresponding state transition table. Because the number of attack patterns is growing rapidly, reducing the memory required to store the state transition table has become imperative for memory architectures. In this project, we first propose an optimized method and architecture that reduce the memory requirement of storing the state transition table. We intend to implement the new architecture on GPUs and to evaluate its performance using attack patterns from Snort V2.8 and input packets from DEFCON. Furthermore, to increase the throughput of pattern matching on large-scale data inputs, we intend to construct a GPU cluster using the high-speed NVIDIA GPU computing resources provided by Amazon Elastic Compute Cloud (Amazon EC2).
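The failureless traversal that distinguishes PFAC from classic Aho-Corasick can be illustrated with a minimal sequential sketch: the pattern set is compiled into a goto-only trie (no failure links), and a separate traversal is started at every input position, each one simply stopping at its first mismatch. On a GPU each traversal would be one thread. The pattern set and input text below are illustrative examples, not taken from the project.

```python
def build_trie(patterns):
    """Compile patterns into a goto-only trie (no failure links)."""
    trie = [{}]          # trie[state] maps a character to the next state
    output = [None]      # output[state] = matched pattern if state is final
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in trie[state]:
                trie.append({})
                output.append(None)
                trie[state][ch] = len(trie) - 1
            state = trie[state][ch]
        output[state] = pat
    return trie, output

def pfac_match(text, trie, output):
    """Start one failureless traversal at every input position;
    in the GPU version each start position is handled by one thread."""
    matches = []
    for start in range(len(text)):
        state = 0
        for pos in range(start, len(text)):
            state = trie[state].get(text[pos])
            if state is None:          # first mismatch: this traversal ends
                break
            if output[state] is not None:
                matches.append((start, output[state]))
    return matches

trie, out = build_trie(["he", "she", "his", "hers"])
print(pfac_match("ushers", trie, out))
# -> [(1, 'she'), (2, 'he'), (2, 'hers')]
```

Because no traversal ever falls back through failure links, the per-position scans are fully independent, which is what makes the algorithm embarrassingly parallel on GPUs.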
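To show how perfect hashing can shrink a sparse state transition table, the sketch below stores only the valid (state, input byte) transitions and searches for a collision-free hash multiplier by trial. This is an illustrative scheme, not the project's actual construction: every lookup then costs a single probe plus one stored-key comparison, instead of indexing a full states-by-256 table.

```python
import random

P = 2**31 - 1  # a large prime for the multiplicative hash

def build_perfect_table(transitions):
    """transitions: dict mapping (state, byte) -> next_state.
    Searches for a multiplier `a` and table size `m` with no collisions."""
    keys = [(s << 8) | c for (s, c) in transitions]
    m = len(transitions)                      # start as small as possible
    while True:
        a = random.randrange(1, P)
        slots = {((a * k) % P) % m for k in keys}
        if len(slots) == len(keys):           # collision-free: perfect for this key set
            break
        m += 1                                # grow the table slightly and retry
    table = [None] * m
    for (s, c), nxt in transitions.items():
        k = (s << 8) | c
        table[((a * k) % P) % m] = (k, nxt)
    return table, a, m

def lookup(table, a, m, state, byte):
    """One probe; the stored key distinguishes valid from absent transitions."""
    k = (state << 8) | byte
    entry = table[((a * k) % P) % m]
    if entry is not None and entry[0] == k:
        return entry[1]                       # valid transition
    return None                               # no transition: the scan terminates

# Tiny DFA fragment for the single pattern "he": 0 -'h'-> 1 -'e'-> 2
trans = {(0, ord('h')): 1, (1, ord('e')): 2}
table, a, m = build_perfect_table(trans)
print(lookup(table, a, m, 0, ord('h')))      # -> 1
print(lookup(table, a, m, 0, ord('x')))      # -> None
```

The table size ends up proportional to the number of valid transitions rather than to states × alphabet size, which is the memory saving the project targets; production schemes use more sophisticated (e.g. minimal) perfect-hash constructions than this trial search.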
URI: http://rportal.lib.ntnu.edu.tw/handle/77345300/36400
Other Identifiers: ntnulib_tp_E0213_04_008
Appears in Collections: 教師著作 (Faculty Works)

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.