Tamkang University Institutional Repository (淡江大學機構典藏): Item 987654321/105521
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/105521


    Title: 關聯式資料庫結合NoSQL特性處理時間序列資料之研究
    Other Titles: A study on time series data processing with relational database and NoSQL feature
    Authors: 黃聖文;Huang, Sheng-Wen
    Contributors: 淡江大學資訊管理學系碩士在職專班 (In-service Master's Program, Department of Information Management, Tamkang University)
    吳錦波;Wu, Jiin-Po
    Keywords: 巨量資料;時間序列;NoSQL;Informix;Big Data;Timeseries
    Date: 2015
    Issue Date: 2016-01-22 14:58:10 (UTC+8)
    Abstract: 關聯式資料庫在資訊歷史發展過程中,佔有相當大的份量,發展相當成熟,而近年來在軟硬體技術的發展,資料的類型格式開始多元化,處理的速度需求持續增加,資料的成長量也開始無法負荷,逐漸衍伸了巨量資料的議題。
    本研究的目的是在關聯式資料庫中,模擬企業內部可能產生的大量資料,透過延展NoSQL資料庫的部分特性,處理具有時間特性持續產生的巨量資料。使用IBM Informix推出的混合型資料庫進行實驗,透過腦波儀模擬儀器設備取得腦波十四個波段,持續不斷產生的腦波資料,將資料透過三種不同實驗特性寫入資料庫,比較關聯式資料架構、時間序列資料架構、時間序列資料結合JSON資料格式架構,進行資料處理的時間與架構異動的成本比較,並將讀取儀器設備之資料進行分類,進行分析模擬並呈現。
    經過實驗的結果,在關聯式資料庫的基礎架構上,延展時間序列資料與JSON儲存格式的特性,可快速達到存取資料的目的,在分析速度上更加的即時,並可減少處理資料與架構異動的時間成本。企業可參照本研究架構進行應用,來達成傳統資料庫中結合NoSQL特性處理大量時間序列資料的目的。
    Relational databases have played an important role in the history of IT applications and are a mature technology. In recent years, however, with advances in hardware and software, data formats have become more diverse, demands on processing speed keep rising, and data volumes have grown beyond what traditional systems can comfortably handle, giving rise to the issues now grouped under big data.
    The purpose of this study is to simulate the large volumes of data that an enterprise might continuously generate and to process them in a relational database extended with selected NoSQL features. The experiments use the hybrid database from IBM Informix. A fourteen-channel electroencephalograph (EEG) serves as the instrument, producing a continuous stream of brain-wave data as a surrogate for such big data. The brain-wave data are written into the database under three experimental architectures: a relational data architecture, a time series data architecture, and a time series architecture combined with the JSON data format; the three are compared on data processing time and the cost of architecture (schema) changes. The collected data are then classified, analyzed, and presented.
    The experimental results show that extending the relational foundation with time series data and the JSON storage format meets the processing needs of big data: data can be stored and retrieved faster, analysis is closer to real time, and both data-processing time and the cost of architecture changes are reduced. This study can serve as a reference for enterprises that want to process large volumes of time series data by combining a traditional relational database with NoSQL features.
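    The abstract above compares three storage architectures, but the record itself contains no schema details. The sketch below is only an illustration of how those three layouts might be modeled; it uses SQLite in Python as a stand-in for IBM Informix's hybrid (relational, TimeSeries, and JSON) engine, which is not reproduced here, and every table name, column name, and channel label is hypothetical rather than taken from the thesis.

        import json
        import sqlite3

        # Illustrative stand-in only: the thesis uses IBM Informix's hybrid
        # engine; SQLite is used here purely to sketch the three layouts being
        # compared. All identifiers below are hypothetical.
        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # 1) Relational layout: one row per channel reading. Adding a channel
        #    means more rows (or a schema change in a wide-table variant).
        cur.execute("""CREATE TABLE eeg_relational (
                           device_id INTEGER,
                           ts        TEXT,   -- sample timestamp
                           channel   TEXT,   -- e.g. 'ch01' .. 'ch14'
                           value     REAL)""")

        # 2) Time-series-style layout: one row per device and timestamp, with
        #    the whole 14-channel sample packed into a single column.
        cur.execute("""CREATE TABLE eeg_timeseries (
                           device_id INTEGER,
                           ts        TEXT,
                           sample    BLOB)""")

        # 3) Time series combined with JSON: the per-sample payload is a
        #    self-describing JSON document, so new channels need no DDL change.
        cur.execute("""CREATE TABLE eeg_ts_json (
                           device_id INTEGER,
                           ts        TEXT,
                           doc       TEXT)""")

        # Relational layout: two of the fourteen channels shown.
        for ch, value in (("ch01", 4201.5), ("ch02", 4388.2)):
            cur.execute("INSERT INTO eeg_relational VALUES (?, ?, ?, ?)",
                        (1, "2015-06-01T12:00:00.000", ch, value))

        # JSON layout: one document per sample.
        sample = {"ch01": 4201.5, "ch02": 4388.2}   # truncated 14-channel sample
        cur.execute("INSERT INTO eeg_ts_json VALUES (?, ?, ?)",
                    (1, "2015-06-01T12:00:00.000", json.dumps(sample)))

        # Pull one channel back out of the JSON payload (SQLite's json_extract,
        # available in builds with the JSON1 extension, stands in for Informix's
        # BSON/JSON functions).
        print(cur.execute(
            "SELECT json_extract(doc, '$.ch01') FROM eeg_ts_json").fetchone()[0])

    The third layout mirrors the abstract's schema-change argument: because each sample is a self-describing document, adding or dropping channels only changes the payload, not the table definition, which is where the reduced cost of architecture changes would come from.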
    Appears in Collections:[Graduate Institute & Department of Information Management] Thesis

    Files in This Item:

    File          Size    Format
    index.html    0 KB    HTML

    All items in the Tamkang University Institutional Repository are protected by copyright, with all rights reserved.

