    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/111349

    Title: 跨領域自適應調整與分類方法應用於以樣本學習為基礎的個人肖像風格轉換技術
    Other Titles: Example-based headshot style transfer by using heterogeneous domain adaptation and classification
    Authors: 陳毅仲;Chen, Yi-Chung
    Contributors: 淡江大學資訊工程學系碩士班
    凃瀞珽;Tu, Ching-Ting
    Keywords: Local Binary Pattern (LBP);Domain adaptation;Face illumination transfer;Markov Random Field;AdaBoost
    Date: 2016
    Issue Date: 2017-08-24 23:50:36 (UTC+8)
    Abstract: This thesis proposes a portrait-photo lighting style transfer system: given a test face image and a set of database example images with a particular lighting style, the system automatically alters the lighting appearance of the input image. This problem is difficult because the appearance of lighting on a face depends on factors such as the distance between the light source and the face and the face's three-dimensional geometry; using only the two-dimensional information of the input image, it is hard to correctly estimate the distribution of light over facial features. Because face images share a common geometric structure and texture appearance, pattern recognition and machine learning techniques such as face super-resolution and face sketch synthesis often learn the mapping between styles from paired examples. Accordingly, the proposed transfer system uses example-based learning to establish the correlation between the input image signals (source domain) and the expected output image signals (target domain), compensating for the limited information in a two-dimensional input image.
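    The example-based idea described above can be illustrated with a minimal sketch: paired source/target example patches are collected, and each input patch is replaced by the target-style patch whose source-domain pair is its nearest neighbour. All data and names here are hypothetical toy stand-ins, not the thesis's actual pipeline:

    ```python
    # Hypothetical sketch of example-based style transfer at the patch level.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 500, 25                           # 500 paired 5x5 example patches
    src_examples = rng.normal(size=(n, d))   # patches under the input lighting
    tgt_examples = src_examples * 1.3 + 0.2  # paired patches under the target lighting

    def transfer(patch):
        """Return the target-style patch whose source-domain pair is the
        nearest neighbour of the input patch (simple L2 lookup)."""
        i = np.argmin(np.linalg.norm(src_examples - patch, axis=1))
        return tgt_examples[i]

    out = transfer(rng.normal(size=d))
    ```

    A real system would of course add spatial consistency between neighbouring patches, which is what the thesis's graph (MRF) formulation provides.
    
    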
    In this study, we propose a system for transferring portrait lighting styles. Given a test image taken under a particular lighting style, the system automatically transfers its lighting condition to another lighting style. Lighting estimation is generally difficult, since the light reflected from facial skin depends on the position of the light source and the three-dimensional facial geometry. The proposed system, however, requires only two-dimensional image information. First, a pseudo database is created to establish a correlation between two different lighting conditions, so the estimation is treated as a domain transfer problem. To distinguish the lighting geometry between the two lighting domains while preserving personal characteristics, an AdaBoost-based approach is adopted to extract discriminative features. The lighting style transfer is then synthesized in two steps: the lighting layer is synthesized first, followed by the detailed textures. Both synthesis steps are formulated as graph models, with the extracted features embedded as constraints. The proposed framework was evaluated on the Kelco, AR, and Yale B face databases. According to our results and analysis, the proposed feature extraction step significantly improves the final synthesized results. Compared with previous works, the proposed framework is less sensitive to the appearance diversity of the training examples and can be applied to test subjects that are less similar to the training database.
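    The AdaBoost-based extraction of lighting-discriminative features can be sketched as follows. This is a toy illustration using scikit-learn, assuming synthetic stand-ins for LBP-style patch descriptors from the two lighting domains (the actual features and training data in the thesis differ):

    ```python
    # Hypothetical sketch: rank features by how well they separate two lighting
    # domains, using AdaBoost with decision-stump weak learners.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    # Toy 64-dim descriptors: 200 patches per lighting domain, with the
    # target domain's statistics shifted to mimic a lighting change.
    src = rng.normal(0.0, 1.0, (200, 64))
    tgt = rng.normal(0.5, 1.0, (200, 64))
    X = np.vstack([src, tgt])
    y = np.array([0] * 200 + [1] * 200)   # 0 = source domain, 1 = target domain

    # Decision stumps let AdaBoost weight individual feature dimensions
    # by their ability to discriminate the two domains.
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

    # High-importance features are lighting-discriminative; a pipeline like
    # the thesis's would embed such features as constraints in its MRF models.
    top = np.argsort(clf.feature_importances_)[::-1][:8]
    print("most lighting-discriminative feature indices:", top)
    ```

    Decision stumps are a natural choice here because each stump tests a single feature dimension, so the boosted ensemble's `feature_importances_` directly ranks individual features.
    
    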
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses and Dissertations