    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/102587

    Title: 組合式超解析人臉影像合成系統 (Ensemble-based super-resolution face image synthesis system)
    Other Titles: Face hallucination using ensemble face synthesis
    Authors: 羅章仁;Luo, Jang-Ren
    Contributors: 淡江大學資訊工程學系資訊網路與通訊碩士班 (Tamkang University, Department of Computer Science and Information Engineering, Master's Program in Networks and Communications)
    凃瀞珽;Tu, Ching-Ting
    Keywords: Face hallucination; Image representation; Boosting; Regression
    Date: 2014
    Issue Date: 2015-05-04 09:59:30 (UTC+8)
    Abstract: This thesis proposes an example-based face hallucination system. The problem is challenging because two faces that appear similar at low resolution are often highly distinguishable at high resolution. To make efficient use of a limited training set when learning the relationship between low and high resolution, previous systems have developed different facial image representations for different goals: processing the whole face image directly to preserve the global facial geometry, working on local face patches to cope with insufficient training samples, or designing various one-dimensional, two-dimensional, and "glocal" representations to improve the efficiency of data analysis. Notably, even the same core algorithm can produce different synthesis results under different image representations.
    This study develops a face hallucination system based on a novel two-dimensional direct combined model (2DDCM) algorithm that employs a large collection of low-resolution/high-resolution facial pairwise training examples. This approach uses a formulation that directly combines the pairwise examples into a 2D combined matrix while completely preserving the geometry-meaningful facial structures and the detailed facial features. Such a representation is expected to yield a useful transformation for face reconstruction. Our algorithm achieves this goal by addressing four key issues. First, we establish the 2D combination representation, which defines two structure-meaningful vector spaces that respectively describe the vertical and horizontal facial-geometry properties. Second, we directly combine the low-resolution and high-resolution pairwise examples to completely model their relationship, thereby preserving their significant features. Third, we develop an optimization framework that finds an optimal transformation to best reconstruct the given low-resolution input. The 2D combination representation makes this transformation more powerful than other approaches. Fourth, specific to our framework, we apply the proposed 2DDCM algorithm to model both the global and local properties of the facial image. Extensive experiments demonstrate that our approach produces high-quality hallucinated faces.
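    The core idea behind direct combined models can be illustrated with a minimal sketch: combination weights are estimated from the low-resolution examples and then reused on the paired high-resolution examples, so that the LR/HR relationship is carried by one shared coefficient vector. The code below is an illustrative one-dimensional (vectorized) sketch under assumed toy data, not the thesis's actual 2DDCM formulation; the function name, the ridge regularizer `lam`, and the random "faces" are all hypothetical.

    ```python
    import numpy as np

    def hallucinate(lr_input, lr_examples, hr_examples, lam=1e-3):
        """Sketch of example-based face hallucination (not the exact 2DDCM).

        lr_input:    (d_lr,) flattened low-resolution query face
        lr_examples: (n, d_lr) flattened LR training faces
        hr_examples: (n, d_hr) corresponding HR training faces

        Solves a ridge-regularized least-squares problem for combination
        weights w over the LR examples, then applies the SAME weights to
        the HR examples -- the shared-coefficient idea behind direct
        combined models.
        """
        A = lr_examples.T                                  # (d_lr, n)
        # w = argmin ||A w - lr_input||^2 + lam * ||w||^2
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                            A.T @ lr_input)
        return hr_examples.T @ w                           # (d_hr,) HR estimate

    # Toy usage with random "faces" (hypothetical data, fixed seed)
    rng = np.random.default_rng(0)
    lr_ex = rng.standard_normal((20, 64))                  # 20 examples, 8x8 LR
    hr_ex = rng.standard_normal((20, 1024))                # paired 32x32 HR
    query = lr_ex[0]                                       # query = a known example
    out = hallucinate(query, lr_ex, hr_ex)
    ```

    Because the query here is itself a training example and the regularizer is small, the recovered weights concentrate on that example, so the output closely matches its paired HR face; for a genuinely novel input, the output is a blend of the HR examples instead.
    
    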
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses and Dissertations


    All items in the institutional repository (機構典藏) are protected by copyright, with all rights reserved.
