    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/35874


    Title: 使用於動態影像編碼之區塊式移動估測演算法
    Other Titles: The block-based motion estimation algorithms in video coding
    Authors: 林漢庭;Lin, Han-ting
    Contributors: Department of Electrical Engineering (Executive Master's Program), Tamkang University
    江正雄;Chiang, Jen-shiun
    Keywords: Motion estimation;Video coding;Block-based
    Date: 2006
    Issue Date: 2010-01-11 07:15:45 (UTC+8)
    Abstract: In this thesis, we propose three motion estimation algorithms for block-based video coding. Block-based motion estimation has been widely used for more than a decade in video compression standards such as MPEG-1, MPEG-2, MPEG-4, and H.263. Since 2001, when the Joint Video Team (JVT), formed by the MPEG and VCEG groups, began developing H.264/MPEG-4 Part 10 Advanced Video Coding, many of its new features have been widely discussed; in particular, multiple reference frames greatly strengthen the search capability of motion estimation. Although the new standard far outperforms earlier standards in compression efficiency, the high computational complexity of its encoder makes real-time operation difficult to achieve. The fast motion estimation method adopted in the current H.264 reference software, JM 10.1, is the unsymmetrical multi-resolution hexagon search algorithm (UMHexagonS). In finding the best motion vector, the hybrid search strategy of UMHexagonS clearly outperforms other algorithms (FS, 3SS, 4SS, DS, etc.). However, compared with those algorithms, UMHexagonS is computationally more complex and harder to implement. To address this problem, we propose a simple and efficient motion search algorithm: by predicting the initial search center more accurately, it avoids falling into local minima and eases hardware implementation. Experimental results show that the proposed algorithm markedly reduces motion estimation time and the number of search points, while its objective quality (Peak Signal-to-Noise Ratio) and bit-rate are nearly comparable to those of the UMHexagonS algorithm adopted in H.264. In addition to the new algorithm for H.264, based on the characteristics of block matching we also developed two new motion estimation algorithms, called the Hybrid Predictable Hexagon with Three-Step Search (HPHTS) and the Diamond-Arc-Hexagon Search (DAHS). For both small and large motions, these two algorithms achieve good prediction quality and improved search speed.
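    The block-matching model underlying all of the algorithms discussed in this abstract can be illustrated with a minimal full-search (FS) sketch: the current block is compared, by sum of absolute differences (SAD), against every candidate block within a ±p-pixel window of the reference frame, and the lowest-cost displacement becomes the motion vector. The function names, block size, and search range below are illustrative assumptions, not the thesis's implementation:

    ```python
    import numpy as np

    def sad(a, b):
        """Sum of absolute differences (SAD) between two equal-sized blocks."""
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    def full_search(ref, cur, bx, by, block=16, p=7):
        """Exhaustive full search (FS): evaluate every displacement within
        +/- p pixels of (bx, by) and return the minimum-SAD motion vector."""
        h, w = ref.shape
        cur_block = cur[by:by + block, bx:bx + block]
        best_mv, best_cost = (0, 0), None
        for dy in range(-p, p + 1):
            for dx in range(-p, p + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue  # skip candidate blocks outside the reference frame
                cost = sad(cur_block, ref[y:y + block, x:x + block])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
        return best_mv, best_cost
    ```

    Full search is optimal but costs (2p+1)² SAD evaluations per block; that cost is exactly what fast algorithms such as 3SS, DS, and UMHexagonS are designed to avoid.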
    In recent years, video coding experts from ITU-T VCEG and the ISO/IEC MPEG group formed the Joint Video Team (JVT) to develop the emerging H.264/MPEG-4 Advanced Video Coding (AVC) standard. H.264/AVC achieves significant rate-distortion efficiency through many useful video encoding and decoding tools. Compared with H.263, the new standard includes motion estimation (ME) with variable block sizes and multiple reference frames, intra prediction, 4×4 block-based residue coding, adaptive block-size transform, an in-loop deblocking filter, and so on. However, the motion estimation process accounts for a large share of the computational complexity. Hence, fast motion estimation has become one of the most important issues in the development of H.264/AVC.
    In the recent reference software of the H.264 standard, JM 10.1, the UMHexagonS motion estimation algorithm is adopted to find the best motion vector for video coding. Its hybrid search strategies significantly outperform earlier algorithms (FS, 3SS, 4SS, DS, etc.). However, the high computational complexity of UMHexagonS makes the codec hard to implement in real-time applications. In this thesis, a new fast motion estimation algorithm, the Hierarchical Predictable Hexagon Search (HPHS), is proposed for H.264; it uses a precise initial search center and simple search strategies to complete motion estimation. Experimental results indicate that the proposed method performs well: the coding performance improves significantly, and the computational complexity of integer-pixel motion estimation in H.264 decreases tremendously.
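    As a concrete example of the fast coarse-to-fine strategies named above, the classic three-step search (3SS) can be sketched as follows: it evaluates the centre and its eight neighbours at a coarse step, recentres on the best match, then halves the step until it reaches one pixel. The signature and defaults are illustrative assumptions, not code from the thesis or from JM:

    ```python
    import numpy as np

    def sad(a, b):
        """Sum of absolute differences (SAD) between two equal-sized blocks."""
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    def three_step_search(ref, cur, bx, by, block=16, p=7):
        """Classic three-step search (3SS): test the centre and its eight
        neighbours at a coarse step, move the centre to the best match,
        halve the step, and repeat until the step falls below one pixel."""
        h, w = ref.shape
        cur_block = cur[by:by + block, bx:bx + block]
        cx, cy = bx, by                       # current search centre
        best_cost = sad(cur_block, ref[by:by + block, bx:bx + block])
        step = max(1, (p + 1) // 2)           # e.g. 4 for a +/-7 search range
        while step >= 1:
            best_pt = (cx, cy)
            for dy in (-step, 0, step):
                for dx in (-step, 0, step):
                    y, x = cy + dy, cx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # skip candidate blocks outside the frame
                    cost = sad(cur_block, ref[y:y + block, x:x + block])
                    if cost < best_cost:
                        best_cost, best_pt = cost, (x, y)
            cx, cy = best_pt
            step //= 2
        return (cx - bx, cy - by), best_cost
    ```

    3SS needs only about 25 SAD evaluations instead of (2p+1)², but its large fixed first step can be trapped by local minima for small motions; that weakness is what predictive initial search centres, as in the proposed algorithms, are meant to address.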
    In addition to the proposed HPHS for the H.264 video coding standard, we also developed two new block-matching motion estimation algorithms: the Hybrid Predictable Hexagon with Three-Step Search (HPHTS) and the Diamond-Arc-Hexagon Search (DAHS). Both algorithms achieve good performance in search speed and predicted video quality.
    Appears in Collections: [Department and Graduate Institute of Electrical Engineering] Theses



