    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/52363


    Title: 一個基於區塊的正交區域保留投影之人臉超解析技術
    Other Titles: A block-based orthogonal locality preserving projection method for face super-resolution
    Authors: 吳哲明;Wu, Che-ming
    Contributors: Master's Program, Department of Computer Science and Information Engineering, Tamkang University
    顏淑惠;Yen, Shwu-huey
    Keywords: Face Super-Resolution;Locality Preserving Projection;Neural Network;Orthogonal;Manifold;General Regression Neural Network
    Date: 2010
    Issue Date: 2010-09-23 17:34:38 (UTC+8)
    Abstract: Surveillance cameras typically capture only low-resolution images, which makes it difficult to recognize the people in them. To address this problem, many researchers have proposed methods for super-resolving such images.
    Here we propose a face super-resolution technique based on Orthogonal Locality Preserving Projections (OLPP). The goal is to discover the geometric manifold structure of local neighborhoods and produce orthogonal basis functions. Our algorithm performs face super-resolution block by block. First, each low-resolution face image in the training database is divided into 4 blocks according to the positions of the facial features; the dimensionality of each block is reduced with PCA (Principal Component Analysis), and an OLPP transformation matrix is then constructed. In parallel, a GRNN (General Regression Neural Network) is trained with the OLPP coefficients of the known low-resolution images and the corresponding high-resolution images. A low-resolution image to be reconstructed is likewise divided into 4 blocks, and their coefficients are computed with the constructed OLPP transforms. These coefficients are then fed into the trained GRNNs to reconstruct each block at the original resolution. Experimental results show that the proposed method yields satisfactory results.
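    The per-block training pipeline described above (split each face into 4 blocks, then reduce each block's dimensionality with PCA before building the OLPP transform) can be sketched as follows. This is an illustrative sketch only: the block boundaries, image sizes, and number of components are assumptions, not the thesis's actual settings, and the OLPP step itself is omitted.

    ```python
    import numpy as np

    def split_into_blocks(face):
        """Split a face image into 4 horizontal bands
        (roughly: forehead, eyes, nose, mouth) and flatten each."""
        h = face.shape[0] // 4
        return [face[i * h:(i + 1) * h].ravel() for i in range(4)]

    def pca_fit(X, n_components):
        """PCA via SVD on mean-centered data; rows of X are samples."""
        mean = X.mean(axis=0)
        Xc = X - mean
        # Rows of Vt are the principal directions, ordered by variance.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return mean, Vt[:n_components]      # (mean, projection matrix W)

    def pca_project(X, mean, W):
        """Project samples onto the principal subspace."""
        return (X - mean) @ W.T

    # Toy training set: 20 random 16x16 "low-resolution faces".
    rng = np.random.default_rng(0)
    faces = rng.random((20, 16, 16))

    # Collect the same block (here, the "eyes" band) from every training face.
    eye_blocks = np.stack([split_into_blocks(f)[1] for f in faces])  # (20, 64)
    mean, W = pca_fit(eye_blocks, n_components=5)
    coeffs = pca_project(eye_blocks, mean, W)                        # (20, 5)
    ```

    In the thesis's pipeline, the OLPP transformation matrix would then be built in this reduced PCA space, and the resulting coefficients used as GRNN inputs.
    
    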
    Due to cost considerations, the quality of images captured by surveillance systems is usually poor, which makes face recognition in these low-resolution images difficult. Here we propose a block-based face super-resolution algorithm using Orthogonal Locality Preserving Projections (OLPP). The purpose is to discover the local structure of the manifold and produce orthogonal basis functions for face images.
    To train the system, we divide the high-resolution images and the corresponding low-resolution images into 4 blocks (forehead, eyes, nose, and mouth). For each block, we use the low-resolution images to find an OLPP transformation matrix. We then use the coefficients obtained from the OLPP (input) and the corresponding high-resolution block (target) to train a GRNN (General Regression Neural Network). An unseen low-resolution face image is divided into 4 blocks in the same way, and the coefficients for each block are obtained with the trained OLPP transformation matrix. Feeding these OLPP coefficients into the GRNN yields an improved super-resolution block, and a super-resolution face image is obtained by combining all blocks. Compared to existing methods, the proposed method shows improved and promising results.
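    The GRNN step described above maps OLPP coefficients of a low-resolution block to a high-resolution block. A minimal sketch, assuming the standard GRNN formulation (a Nadaraya-Watson weighted average of training targets under a Gaussian kernel): the spread `sigma` and all data shapes below are illustrative assumptions, not values from the thesis.

    ```python
    import numpy as np

    def grnn_predict(X_train, Y_train, x_query, sigma=0.05):
        """GRNN prediction for one query: a kernel-weighted average
        of the training targets (standard Nadaraya-Watson form)."""
        d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
        w /= w.sum()                                    # normalize to sum to 1
        return w @ Y_train                              # weighted target average

    rng = np.random.default_rng(1)
    X_train = rng.random((20, 5))     # OLPP coefficients of 20 training blocks
    Y_train = rng.random((20, 256))   # corresponding high-resolution blocks (16x16)

    # Querying with a training input: a small sigma makes the prediction
    # collapse onto that sample's own high-resolution target.
    y_hat = grnn_predict(X_train, Y_train, X_train[3])
    ```

    Because the output is a convex combination of training targets, each reconstructed block stays within the range of the training data; the spread parameter trades off smoothing against fidelity to the nearest neighbors.
    
    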
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses & Dissertations

    Files in This Item:

    File        Size   Format   Views
    index.html  0 KB   HTML     204    View/Open

    All items in the institutional repository are protected by copyright, with all rights reserved.

