    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/109571


    Title: Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network
    Authors: Ching-Ting Tu;Yu-Hsien Chan;Yi-Chung Chen
    Keywords: Direct Combined Model (DCM);Canonical Correlation Analysis (CCA);Markov Random Field (MRF);Face Sketch Synthesis;Statistical Image Models
    Date: 2016-08
    Issue Date: 2017-02-24 02:11:08 (UTC+8)
    Publisher: IEEE
    Abstract: A facial sketch synthesis system is proposed, featuring a 2D direct combined model (2DDCM)-based face-specific Markov network. In contrast to existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches that reproduce the unique drawing style of a particular artist, where this drawing style is learned from a data set consisting of a large number of image/sketch pairwise training samples. The synthesis system comprises three modules, namely, a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representational power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races, even when such images are not included in the training data set. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests.
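The abstract's global module rests on a "direct combined model": paired photo and sketch vectors are stacked into one joint sample so that a single subspace captures their correlated variation, and a sketch is then inferred from a photo alone by fitting the photo block of that subspace. The snippet below is a minimal illustrative sketch of this idea using plain PCA on synthetic toy data; the dimensions, variable names, and linear toy mapping are assumptions for illustration, not the paper's actual 2DDCM/CCA formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: paired photo/sketch patch vectors (dimensions are
# illustrative, not from the paper).
n_pairs, d_photo, d_sketch = 200, 16, 16
photos = rng.normal(size=(n_pairs, d_photo))
# Make sketches a noisy linear function of photos so a joint subspace exists.
W_true = rng.normal(size=(d_photo, d_sketch))
sketches = photos @ W_true + 0.01 * rng.normal(size=(n_pairs, d_sketch))

# --- Training: build the combined model ---
combined = np.hstack([photos, sketches])   # stack photo|sketch per sample
mean = combined.mean(axis=0)
centered = combined - mean
# PCA via SVD; keep enough components to span the joint variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 16
P = Vt[:k].T                               # (d_photo + d_sketch, k) basis

# --- Synthesis: given a new photo vector, infer its sketch vector ---
def synthesize(photo_vec):
    """Least-squares fit of combined-model coefficients using only the
    photo block, then read the sketch block off the reconstruction."""
    Pp = P[:d_photo]                       # photo rows of the joint basis
    b, *_ = np.linalg.lstsq(Pp, photo_vec - mean[:d_photo], rcond=None)
    recon = mean + P @ b
    return recon[d_photo:]                 # sketch part of the reconstruction

test_photo = rng.normal(size=d_photo)
pred = synthesize(test_photo)
true = test_photo @ W_true
print(np.allclose(pred, true, atol=0.5))
```

Because the toy sketches are (nearly) linear in the photos, the joint data lies close to a 16-dimensional subspace, so fitting the subspace coefficients from the photo block alone recovers the paired sketch block well. The paper's actual local module replaces this single global fit with per-patch models and an MRF that enforces consistency between neighboring synthesized patches.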
    Relation: IEEE Transactions on Image Processing 25(8), pp.3546-3561
    DOI: 10.1109/TIP.2016.2570571
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Articles

    Files in This Item:

      • Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network.pdf (4914Kb, Adobe PDF)
      • index.html (0Kb, HTML)

    All items in the Institutional Repository are protected by copyright, with all rights reserved.

