Tamkang University Institutional Repository: Item 987654321/35215
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/35215


    Title: Real-time dynamic background segmentation based on multiple reference value model
    Other Titles: 根據多重參考值模型之即時動態背景切割
    Authors: 彭建文;Peng, Jian-wen
    Contributors: Doctoral Program, Department of Computer Science and Information Engineering, Tamkang University
    洪文斌;Horng, Wen-bing
    Keywords: 背景切割;參考背景模型;Background segmentation;reference background model
    Date: 2007
    Issue Date: 2010-01-11 06:11:27 (UTC+8)
    Abstract: In this thesis, a reliable and more accurate method is proposed for building and maintaining the reference background. Background segmentation is important for video surveillance systems and related applications, because before a target (foreground object) can be recognized, the system must first determine which parts of the scene belong to the foreground and which to the background. The first step is therefore to build a reference background model for the monitored scene, so that the foreground can be extracted against this reference background.
    In the existing literature, each pixel of the reference background has only one reference value for the real background. In the multiple reference value background model (MRV background model) proposed in this thesis, however, each pixel may have several reference values, so the reference background can be built correctly even for complex or cluttered scenes.
    Another important step in background segmentation is background updating. Because the monitored environment keeps changing, for example through the moving shadows of clouds or buildings, the reference background must be revised to reflect these variations; otherwise erroneous foreground will be produced. However, the factors that affect the scene in this way are seldom discussed. In this thesis, two strategies, a global update and a local update, are used together to handle background updating; they are responsible for revising the entire reference background and parts of it, respectively. As a result, the MRV reference background model can operate correctly for longer than other background models. In addition, the factors that affect correct foreground segmentation are examined in detail.
    The experiments show that the MRV reference background model yields a more accurate reference background and finer foreground detail than other background models. Moreover, thanks to the reliable background update strategies, the system not only operates normally in the daytime and at night, but also suppresses the effects of camera shake and of background objects swaying in the scene.
    In this thesis, we propose a reliable and precise method for building and maintaining the reference background of a monitored environment. Background segmentation (subtraction) plays an important role in video surveillance systems and related applications. In order to extract specific targets, an application must recognize which pixels are objects (foreground) and which are not. Therefore, the first step in such systems is usually to build a reference background model for the detected scene, after which the foreground can be extracted by comparison with the reference background model.
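    The thesis does not state its exact decision rule at this point; purely as an illustration, the following sketch assumes grayscale frames held in NumPy arrays and a simple absolute-difference threshold. The function name segment_foreground and the threshold value are hypothetical, not taken from the thesis.

        import numpy as np

        def segment_foreground(frame, reference_bg, threshold=25):
            # Sketch only: mark a pixel as foreground when it differs from the
            # reference background by more than `threshold` (assumed rule).
            diff = np.abs(frame.astype(np.int16) - reference_bg.astype(np.int16))
            return diff > threshold  # boolean foreground mask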
    In the existing literature, each pixel of a reference background model has only one reference value for the real background of the detected scene. In the proposed multiple reference value (MRV) background model, however, each pixel may have multiple reference values. Thus, reference backgrounds can be built correctly even in complex or cluttered scenes.
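    A minimal sketch of the multiple-reference-value idea follows, assuming the K per-pixel reference values are stacked into an array of shape (K, H, W) and that a pixel counts as background if it matches any of its references within a tolerance. The storage layout, tolerance, and matching rule are assumptions for illustration, not the thesis's exact formulation.

        import numpy as np

        def mrv_foreground(frame, mrv_refs, tol=20):
            # mrv_refs has shape (K, H, W): K candidate reference values per pixel.
            # A pixel is background if it lies within `tol` of any of its references.
            diff = np.abs(frame.astype(np.int16)[None, ...] - mrv_refs.astype(np.int16))
            is_background = (diff <= tol).any(axis=0)
            return ~is_background  # boolean foreground mask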
    Updating the reference background is another important step in background segmentation. Because the detected scene changes, for example through the moving shadows of clouds or buildings, the reference background model must be modified to reflect these variations; otherwise, applications will produce erroneous foreground segmentations. However, the situations that cause such erroneous segmentations are seldom discussed.
    In this thesis, a global update and a local update are employed as the strategies for reference background updating; they control the modification of the entire reference background model and of parts of it, respectively. As a result, the reference background model remains usable over longer periods of surveillance than other proposed models. In addition, the situations that cause erroneous foreground segmentations are discussed in detail.
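    One plausible reading of the two strategies is sketched below: a global update that blends the whole reference background toward the current frame at background pixels (absorbing gradual, scene-wide changes such as lighting), and a local update that overwrites only regions flagged as persistently changed. The blending coefficient, the masks, and the update rules are assumptions for illustration; the thesis's actual update criteria are not reproduced here.

        import numpy as np

        def global_update(reference_bg, frame, fg_mask, alpha=0.05):
            # Blend the entire reference background toward the current frame,
            # but only at pixels currently classified as background.
            bg = reference_bg.astype(np.float32)
            blended = (1.0 - alpha) * bg + alpha * frame.astype(np.float32)
            return np.where(fg_mask, bg, blended).astype(reference_bg.dtype)

        def local_update(reference_bg, frame, stable_change_mask):
            # Overwrite only the regions that changed and then stayed static
            # long enough to be treated as part of the scene.
            return np.where(stable_change_mask, frame, reference_bg)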
    Experimental results show that the proposed method obtains a more precise reference background model and preserves more detail in the segmented foreground. Moreover, because of the reliable update strategies, the system operates normally both in the daytime and at night, and it also resists camera shake and the shaking of background objects.
    Appears in Collections:[Graduate Institute & Department of Computer Science and Information Engineering] Thesis

    Files in This Item:

    File    Size    Format
    (unnamed)    0 Kb    Unknown

    All items in the Tamkang University Institutional Repository are protected by copyright, with all rights reserved.

