Tamkang University Institutional Repository: Item 987654321/126784
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/126784


    Title: Arbitrary style transfer system with split-and-transform scheme
    Authors: Tu, Ching-Ting; Lin, Hwei Jen; Tsai, Yihjia; Lin, Zi-Jun
    Date: 2024-01-12
    Issue Date: 2025-03-20 09:24:47 (UTC+8)
    Abstract: For arbitrary image style transfer, several architectures have been proposed that directly compute the transformation matrix of the whitening and coloring transform (WCT) to obtain more satisfactory results. However, calculating the WCT transformation matrix is time-consuming. Li et al. trained a linear transformation module to generate a WCT transformation matrix for any pair of content and style images, avoiding the complex calculation and improving time efficiency. In this work, we introduce a flexible arbitrary image style transfer framework based on this linear transformation module, referred to as LST, which uses deep neural networks to train a linear transformation matrix that approximates the standard WCT matrix. In the first part, the inverse relationship between the whitening matrix and the coloring matrix of the same image is enforced during training of the linear transformation matrix, so that the resulting matrix is more accurate and closer to the standard WCT matrix. In the second part, a split-and-transform scheme is proposed. Unlike LST, which transforms the block of feature maps as a whole, the split-and-transform scheme divides the feature block into several smaller blocks and transforms them individually, making the transformation more localized; the more blocks the feature block is divided into, the more localized the transformation becomes. In addition, the proposed scheme lets users choose the number of divided blocks to flexibly control the locality of the transformations. Experimental results demonstrate the effectiveness and flexibility of the proposed framework through high-quality stylized images and an adjustable balance between the globality and locality of the transformations. The split-and-transform scheme reduces computational time while preserving, or even improving, the stylization results.
    Relation: Multimedia Tools and Applications 83, pp. 62497-62517
    DOI: https://doi.org/10.1007/s11042-023-16582-5
    Appears in Collections: [Graduate Institute & Department of Computer Science and Information Engineering] Journal Article
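
    The abstract above refers to the whitening and coloring transform (WCT), to the inverse relationship between an image's whitening and coloring matrices, and to a split-and-transform scheme that divides the feature block into smaller blocks transformed individually. The NumPy sketch below illustrates these ideas in their classical, closed-form version only. It is an assumption-laden illustration, not the paper's method: the paper trains a neural network to predict the transformation matrix, and the abstract does not say whether the split is spatial or channel-wise (the sketch splits spatially). Names such as `wct`, `split_and_transform`, and `n_splits` are hypothetical.

    ```python
    # Hedged sketch of classical WCT plus a spatial "split-and-transform" variant.
    # Not the authors' implementation; function names and the spatial split are assumptions.
    import numpy as np


    def wct(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
        """Classical WCT on (C, N) feature matrices (C channels, N spatial positions)."""
        # Center both feature sets.
        c_mean = content_feat.mean(axis=1, keepdims=True)
        s_mean = style_feat.mean(axis=1, keepdims=True)
        fc = content_feat - c_mean
        fs = style_feat - s_mean

        # Whitening matrix = Cov(content)^(-1/2). For the same image, the whitening
        # and coloring matrices are exact inverses of each other; this is the
        # relationship the paper enforces while training its linear transformation matrix.
        cov_c = fc @ fc.T / max(fc.shape[1] - 1, 1) + eps * np.eye(fc.shape[0])
        w, v = np.linalg.eigh(cov_c)
        whitening = v @ np.diag(w ** -0.5) @ v.T

        # Coloring matrix = Cov(style)^(1/2).
        cov_s = fs @ fs.T / max(fs.shape[1] - 1, 1) + eps * np.eye(fs.shape[0])
        w, v = np.linalg.eigh(cov_s)
        coloring = v @ np.diag(w ** 0.5) @ v.T

        # Whiten the content features, then re-color them with the style statistics.
        return coloring @ (whitening @ fc) + s_mean


    def split_and_transform(content_feat: np.ndarray, style_feat: np.ndarray, n_splits: int = 4) -> np.ndarray:
        """Split the content feature block into n_splits sub-blocks along the spatial
        axis and transform each one individually (an assumed reading of the abstract's
        split-and-transform scheme; the paper may split differently)."""
        blocks = np.array_split(content_feat, n_splits, axis=1)  # split spatial positions
        transformed = [wct(block, style_feat) for block in blocks]
        return np.hstack(transformed)


    # Toy usage on random 64-channel features from a 32x32 feature map.
    content = np.random.randn(64, 32 * 32)
    style = np.random.randn(64, 32 * 32)
    stylized_global = wct(content, style)                             # one global transform
    stylized_local = split_and_transform(content, style, n_splits=8)  # per-block, more localized
    ```

    In this reading, each sub-block is whitened with statistics estimated only from its own positions, so increasing `n_splits` makes the transformation progressively more localized, which matches the abstract's claim that more divided blocks yield more localized stylization.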


    All items in the institutional repository are protected by copyright, with all rights reserved.

