    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/111367


    Title: 具重建力與鑑別力之混合子空間應用於人臉驗證技術
    Other Titles: Face verification by exploiting reconstructive and discriminative coupled subspaces
    Authors: 林孟穎;Lin, Meng-Ying
    Contributors: Master's Program, Department of Computer Science and Information Engineering, Tamkang University
    凃瀞珽;Tu, Ching-Ting
    Keywords: Principal Component Analysis;Eigenface;Support Vector Machine;AdaBoost;Linear Discriminant Analysis
    Date: 2016
    Issue Date: 2017-08-24 23:51:00 (UTC+8)
    Abstract: This thesis proposes a face verification system that integrates reconstructive and discriminative properties. The system takes a pair of grayscale face images as input and outputs whether the two images show the same person or different people. In practice, face images captured by surveillance systems are often low-resolution, occluded, or otherwise degraded, whereas the face images in the recognition database are mostly high-resolution and unoccluded, so it is difficult to directly match face images with such different properties.
    To address this problem, this thesis proposes face features that integrate reconstructive and discriminative power and can effectively match face images of two different properties. In implementation, we train two PCA (Principal Component Analysis) subspaces: one from same-person image pairs and one from different-person image pairs, where each pair combines two different conditions (e.g., occluded vs. unoccluded, or low-resolution vs. high-resolution). The subspace built from same-person pairs models the appearance correlation across conditions, while the subspace built from different-person pairs models the joint variation across both identity and condition. From the eigen-axes of these two PCA subspaces, an AdaBoost framework selects the axes that are most discriminative for deciding whether a pair shows the same person or different people.
    In addition, we exploit the reconstructive property of PCA to handle face verification under occlusion or insufficient resolution. Concretely, the subspace spanned by the AdaBoost-selected axes, together with the two original PCA subspaces, is used to reconstruct paired face images with more diverse appearances, and these reconstructions serve as the features for the SVM classifier that performs the final verification.
    The proposed framework is evaluated under low-resolution, scarf-occluded, and sunglasses-wearing conditions. Especially under occlusion, where facial appearance is severely corrupted, the proposed method yields a significant improvement in verification rate.
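    The coupled-subspace training step described above can be sketched as follows. This is a minimal illustration only: random arrays stand in for the face-image pairs, and the image size, pair counts, and component counts are assumptions, not the configuration used in the thesis.

    ```python
    # Hypothetical sketch: fit one PCA subspace to same-person pairs and one
    # to different-person pairs, where each training sample concatenates a
    # cross-condition pair (e.g., low-res probe + high-res gallery image).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    d = 64 * 64                  # flattened face-image dimension (assumed)
    n_pairs = 200

    # Each row is one concatenated pair vector [probe; gallery].
    same_pairs = rng.standard_normal((n_pairs, 2 * d))   # same person, two conditions
    diff_pairs = rng.standard_normal((n_pairs, 2 * d))   # different people

    pca_same = PCA(n_components=50).fit(same_pairs)  # cross-condition correlation
    pca_diff = PCA(n_components=50).fit(diff_pairs)  # identity + condition variation

    print(pca_same.components_.shape)  # (50, 8192): 50 coupled eigen-axes
    ```

    Each eigen-axis of these subspaces spans both halves of the pair vector, which is what lets the method relate the two image conditions.
    
    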
    Face verification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images in the database are high-quality, while probe images are usually low-resolution or heavily occluded. In this study, we propose a regression-based approach for face verification in such low-quality scenarios. We adopt a principal component analysis (PCA) approach to model the correlation between pairwise samples, where each sample contains a heterogeneous pair of facial images captured under different modalities or conditions (e.g., low-resolution vs. high-resolution, or occluded vs. non-occluded). Three common feature spaces are constructed from cross-domain pairwise samples, with the goal of eliminating appearance variations and maximizing discrimination between different subjects. The derived subspaces are then used to represent the subjects of interest and achieve satisfactory verification performance. Experiments on a variety of synthesis-based verification tasks under low-resolution and occlusion conditions verify the effectiveness of the proposed learning framework.
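    The selection and verification stages of the pipeline above can be sketched as follows. This is a hypothetical illustration with random data standing in for real PCA-projected face pairs: decision stumps serve as per-axis weak learners so that AdaBoost's feature importances rank the eigen-axes, and an SVM makes the final same/different decision. None of the dimensions or hyperparameters come from the thesis.

    ```python
    # Hypothetical sketch: AdaBoost selects discriminative PCA axes, then an
    # SVM classifies pairs as same-person (1) or different-person (0).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n, d = 300, 256                      # number of pairs, pair-vector dimension (toy)
    X = rng.standard_normal((n, d))      # stand-in for concatenated face pairs
    y = rng.integers(0, 2, n)            # 1 = same person, 0 = different people

    coeffs = PCA(n_components=40).fit_transform(X)   # per-pair PCA coefficients

    # Default depth-1 trees act as per-axis weak learners, so the ensemble's
    # feature importances score each eigen-axis's discriminative power.
    ada = AdaBoostClassifier(n_estimators=30).fit(coeffs, y)
    selected = np.argsort(ada.feature_importances_)[-10:]   # keep top 10 axes

    # Final verifier trained on the selected, most discriminative axes.
    svm = SVC(kernel="rbf").fit(coeffs[:, selected], y)
    pred = svm.predict(coeffs[:, selected])
    ```

    In the thesis the SVM features are PCA reconstructions built from the selected subspace rather than raw coefficients; the sketch keeps coefficients only for brevity.
    
    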
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses and Dissertations


