Tamkang University Institutional Repository: Item 987654321/87926
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/87926


    Title: The motion inpainting based on motion vectors and feature points
    Other Titles: 以動量與特徵點為基礎的動態修補演算法
    Authors: 蔡程緯;Tsai, Joseph C.
    Contributors: Doctoral Program, Department of Computer Science and Information Engineering, Tamkang University
    顏淑惠;Yen, Shwu-Huey
    Keywords: Video Inpainting;Video Production;Motion Analysis;Motion Inpainting
    Date: 2013
    Issue Date: 2013-04-13 11:52:38 (UTC+8)
    Abstract: Image and video inpainting techniques are widely applied in daily life. In recent years, video inpainting has been able to remove objects from videos shot with stationary or moving cameras, provided the background is nearly static. Removing objects from a dynamic background, such as smoke, flames, or a flowing river, remains a difficult problem: existing inpainting algorithms typically produce discontinuities in the dynamic structure. In this work I propose a new inpainting algorithm to address this problem. Unlike the patch-search methods of previous inpainting algorithms, the proposed method locates candidate patches by combining edge, color, and motion-vector information; it then extends the candidate clip in time along a minimum-energy seam so that its duration matches the original video, and finally pastes the patch in using graph cut and the Poisson equation. Beyond the algorithm itself, I also propose a method for evaluating the inpainted result, so that users can judge the quality of the repair. This technique can be applied to special-effects production and video post-processing.
    Image and video inpainting technologies have been widely studied in the literature. In the past few years, video inpainting methods have been able to remove objects from stationary or non-stationary videos with mostly static backgrounds. However, when removing objects from a dynamic background, such as a fire or smoke scene, most video inpainting algorithms produce a discontinuous visual effect. Although several technologies can generate dynamic textures, problems remain for inpainting, such as poor motion continuity caused by color or motion that is inconsistent with the original video. We propose a novel inpainting algorithm, called motion inpainting, to solve this motion-texture problem. The algorithm comprises a few steps: searching for motion patches from different time slots, extending motion streams, and blending motion patches. We also propose a mechanism to evaluate motion coherence in our experiments. The algorithm is generic and can be used in special-effect applications.
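    The temporal-extension step in the abstract relies on finding a minimum-energy seam. As an illustrative sketch only (the dissertation's actual formulation is not reproduced here), the classic seam-carving dynamic program over a 2-D cost map captures the idea; the function name and the use of NumPy are assumptions, not the author's code:

    ```python
    import numpy as np

    def min_energy_seam(cost):
        """Find a minimum-energy top-to-bottom seam through a 2-D cost map
        via dynamic programming (the seam-carving recurrence)."""
        h, w = cost.shape
        dp = cost.astype(float).copy()        # dp[y, x] = cheapest path cost ending at (y, x)
        back = np.zeros((h, w), dtype=int)    # back[y, x] = column chosen in row y - 1
        for y in range(1, h):
            for x in range(w):
                lo, hi = max(x - 1, 0), min(x + 2, w)   # 8-connected: x-1, x, x+1
                j = int(np.argmin(dp[y - 1, lo:hi])) + lo
                back[y, x] = j
                dp[y, x] += dp[y - 1, j]
        # Backtrack from the cheapest cell in the last row.
        seam = [int(np.argmin(dp[-1]))]
        for y in range(h - 1, 0, -1):
            seam.append(int(back[y, seam[-1]]))
        return seam[::-1]                     # one column index per row
    ```

    In the video setting, the same recurrence would run over a spatio-temporal cost volume rather than a single image, selecting where to cut when looping a patch clip so its duration matches the original footage.
    
    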
    Appears in Collections:[Graduate Institute & Department of Computer Science and Information Engineering] Thesis

    Files in This Item:

    File: index.html (0 Kb, HTML; 151 views)

    All items in the Institutional Repository are protected by copyright, with all rights reserved.

