Tamkang University Institutional Repository: Item 987654321/122216
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/122216


    Title: Lighting- and Personal Characteristic-Aware Markov Random Field Model for Facial Image Relighting System
    Authors: Ching-Ting Tu;Hwei-Jen Lin;Chin-Yu Chang
    Keywords: Face image synthesis;Markov random field;Adaboost
    Date: 2022-02-09
    Date uploaded: 2022-02-24 12:10:59 (UTC+8)
    Publisher: IEEE Access
    Abstract: In this study, we propose an example-based facial image relighting framework that relights an unseen input 2D facial image from a specified source lighting condition to a target lighting condition. Such a relighting framework is highly challenging, since the highlight and shadow areas in the relighted image usually follow the specific facial feature characteristics of the input image. When facial images have to be relit in a specified lighting style following only a few provided pairwise lighting examples, the problem becomes even more complex. In contrast to existing learning-based relighting frameworks that directly predict the intensity of the target image, we use a personal- and lighting-specific transformation to formulate the appearance correlation between the source and target lighting conditions. We propose a lighting- and personal characteristic-aware Markov random field model to estimate the transformation parameters of the input subject, which is integrated with additional classifiers to determine which input facial regions should be enhanced or maintained. Experimental results show that the proposed kernel facial relighting model avoids overfitting and generates vivid and recognizable results despite the scarcity of training samples. Furthermore, the relighted results successfully simulate the individual lighting effects produced by the specific personal characteristics of the input image, such as nose and cheek shadows. Finally, the effectiveness of the proposed framework is demonstrated using a robust face verification test on images taken under side-light conditions.
    Relation: IEEE Access 10, p.20432-20444
    DOI: 10.1109/ACCESS.2022.3150341
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Articles
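    Illustrative note: the abstract above describes the method only at a high level. The following is a minimal, hypothetical NumPy sketch of the general idea it outlines (a per-patch transformation learned from paired source/target lighting examples, regularized by an MRF-style smoothness term, then applied to a new source-lit face). The function names, patch size, and the simple ratio-based transform are assumptions introduced here for illustration; the paper's actual kernel transformation, MRF inference, and Adaboost-based region classifiers are not reproduced.

# Minimal illustrative sketch (NumPy only); assumptions, not the authors' model.
import numpy as np

def estimate_patch_ratios(src_examples, tgt_examples, patch=8, eps=1e-3):
    # Average per-pixel intensity ratio between target- and source-lit example
    # pairs, then pool it to one parameter per patch. Inputs are aligned
    # grayscale float arrays in [0, 1].
    R = np.mean([(t + eps) / (s + eps) for s, t in zip(src_examples, tgt_examples)], axis=0)
    H, W = R.shape
    R = R[:H - H % patch, :W - W % patch]
    return R.reshape(H // patch, patch, W // patch, patch).mean(axis=(1, 3))

def smooth_patch_params(Rp, data_w=1.0, smooth_w=0.5, iters=50):
    # Crude MRF-style regularization: each patch parameter is pulled toward its
    # observed value (data term) and toward its 4-neighbours (smoothness term),
    # solved by simple iterative averaging.
    X = Rp.copy()
    for _ in range(iters):
        nb = np.zeros_like(X)
        cnt = np.zeros_like(X)
        nb[1:, :] += X[:-1, :]; cnt[1:, :] += 1
        nb[:-1, :] += X[1:, :]; cnt[:-1, :] += 1
        nb[:, 1:] += X[:, :-1]; cnt[:, 1:] += 1
        nb[:, :-1] += X[:, 1:]; cnt[:, :-1] += 1
        X = (data_w * Rp + smooth_w * nb) / (data_w + smooth_w * cnt)
    return X

def relight(img, Rp, patch=8):
    # Apply the smoothed per-patch transformation to a new source-lit image.
    out = img.astype(np.float64)
    for i in range(Rp.shape[0]):
        for j in range(Rp.shape[1]):
            out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] *= Rp[i, j]
    return np.clip(out, 0.0, 1.0)

    Hypothetical usage: Rp = smooth_patch_params(estimate_patch_ratios(src_imgs, tgt_imgs)); relit = relight(new_face, Rp). The paper additionally uses classifiers (cf. the Adaboost keyword) to decide per facial region whether to enhance or keep the original appearance; that step is omitted in this sketch.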

    Files in this item:

    File: index.html (HTML, 0 KB, 59 views)

    All items in this institutional repository are protected by the original copyright.

