Title: Distributed Computing System and Big Data Real-time Processing Structure --Based on YARN, Storm and Spark
Other Titles: 分散式計算系統及巨量資料處理架構設計-基於YARN,Storm 及Spark
Authors: 劉文卿
Wen-Ching Liou, Po-Wei Tseng
Issue Date: Oct-2016
Publisher: 國立台灣師範大學圖書資訊學研究所
Graduate institute of library and information studies ,NTNU
Abstract: With the arrival of the era of big data, real-time data computation faces many challenges. For example, in futures market forecasting, the market state must be predicted accurately with a model built from large data sets (hundreds of GB to tens of TB) within tens of milliseconds. This paper introduces a real-time big data computing architecture that addresses three practical requirements: high-speed processing, immense data volume, and large-scale data storage. Several machine-learning algorithms, such as SVM (Support Vector Machine) and LR (Logistic Regression), are also implemented under the parallel distributed computing system as a strategy-simulation subsystem. The architecture involves three main cloud computing techniques: 1. Apache YARN integrates system-wide resource management so that cluster resources are used more efficiently. 2. To satisfy the high-speed processing requirement, Apache Storm processes the large real-time data stream and computes thousands of market-state values within tens of milliseconds for subsequent model building. 3. With Apache Spark, a distributed computing architecture is established for model building; by using Spark RDDs (Resilient Distributed Datasets), SVM and LR model building is shortened to within hundreds of milliseconds. To integrate these techniques, the study designs an n-tier distributed architecture in which Apache Kafka serves as the messaging middleware, supporting asynchronous message-based communication among the subsystems.
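The model-building step described in the abstract (item 3) can be illustrated with a minimal sketch. The code below is a hypothetical single-machine version of logistic-regression training by batch gradient descent, not the authors' Spark/RDD implementation; all function names and parameters are illustrative. In the distributed setting the abstract describes, the per-example gradients would instead be computed over RDD partitions and aggregated.

```python
import math

def train_logistic_regression(X, y, lr=0.1, epochs=200):
    """Train logistic regression by full-batch gradient descent.

    Hypothetical single-machine stand-in for the Spark-based model
    building described in the abstract; a real deployment would compute
    partial gradients per RDD partition and aggregate them.
    """
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            err = p - yi                     # gradient of log-loss w.r.t. z
            for j in range(n):
                gw[j] += err * xi[j]
            gb += err
        # average-gradient update step
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, x):
    """Classify x by the sign of the learned linear score."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else 0
```

The sketch shows only the core update rule; the latency figures in the abstract (model building within hundreds of milliseconds) come from distributing exactly this gradient computation across the cluster.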
Other Identifiers: 3D74C371-05E0-C8D8-CDAC-E17D8CCA79BF
Appears in Collections: 圖書館學與資訊科學 (Library and Information Science)

Files in This Item:
File: ntnulib_ja_A1021_4202_025.pdf (2.06 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.