    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/35865

    Title: ESCOT之高效能字元級運算方塊編碼器架構設計
    Other Titles: High-Efficiency Architecture of ESCOT with Word-Level Process Block Coding
    Authors: 黃鼎浩;Hwang, Ting-hao
    Contributors: Master's Program, Department of Electrical Engineering, Tamkang University
    江正雄;Chiang, Jen-shiun
    Keywords: Scalable Video Coding;Entropy Coding;SVC;ESCOT;3D-EBCOT
    Date: 2008
    Issue Date: 2010-01-11 07:15:05 (UTC+8)
    Abstract: With the rapid development of communication and multimedia storage media, the demand for video compression keeps growing. Besides high quality, users also expect cross-platform support and high compression ratios, so developing a high-efficiency Scalable Video Coding (SVC) standard has become extremely important.
    Such a scalable coder must satisfy the following requirements: SNR / temporal / spatial / complexity / region-of-interest / object-based and combined scalability, error resilience and graceful degradation, base-layer compatibility, low transmission delay, random access, good coding efficiency, support for interlaced video, and so on. The Barbell-lifting wavelet-based SVC proposed by Microsoft Research Asia (MSRA) is a video coding technique with these properties. This new coding technique has very high computational complexity, and real-time processing is almost impossible in software alone, so this thesis improves the operation speed by designing a dedicated hardware chip.
    In scalable video coding, the two dominant contributors to computational complexity are motion estimation and entropy coding. The former can be reduced by various algorithms; for the latter, we propose a dedicated hardware design to raise the overall operation speed. The entropy coder used in scalable video coding, Embedded Sub-band Coding with Optimized Truncation (ESCOT), requires a large amount of memory, and because every code block must be coded in three passes, it wastes computation time and causes unnecessary power consumption. This thesis proposes a high-performance ESCOT hardware architecture that uses parallel processing to merge the three coding passes into one, improving the operation efficiency while reducing the memory requirement of the algorithm by 40%. By adding a small amount of hardware, the operation speed is further increased; compared with conventional architectures, the proposed architecture is both faster and cheaper.
    With the urgent demand for video in multimedia applications, video compression techniques become more and more important. They must not only deliver high video quality and compression efficiency, but also provide new functions that enable more applications. Scalable Video Coding (SVC) is a novel, high-efficiency coding technique and is expected to become the next video compression standard. It offers better compression efficiency, superior video quality, error resilience, and richer functionality than MPEG-2 and MPEG-4. The aim of SVC is to enable wide multimedia access services, so that users can obtain multimedia information through various devices, from different locations and on different platforms.
    Microsoft Research Asia (MSRA) proposed the Barbell-lifting wavelet-based SVC, which uses a 3-D wavelet transform to decompose the video sequence into different sub-bands; each sub-band is then independently compressed with entropy coding. SVC is a highly complex technique and cannot reach real-time performance in software alone, so dedicated hardware must be designed to improve the operation speed.
    This thesis proposes a new block coding scheme for ESCOT called Word-Level Process Block Coding. It consists of two parts, the Word-Level Process Pass Concurrent Context Modeling (Word-Level Process PCCM) and a Custom Arithmetic Encoder (Custom-AE), which together increase the coding efficiency and throughput of ESCOT. The Word-Level Block Coding merges the three coding passes into a single pass, and because it operates at word level, encoding multiple bit-planes in parallel, the internal memory required for context modeling is reduced by more than 80%. Moreover, the Word-Level Block Coding encodes 8 samples from 4 different bit-planes concurrently to further increase the context-modeling speed, supporting 1080p at 60 fps with a clock rate of 125 MHz. The proposed word-level block coding architecture significantly improves the operation efficiency at a lower hardware cost.
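    The central idea of the abstract — replacing the conventional three-pass scan of each bit-plane with a single merged pass that classifies every sample on the fly — can be sketched in software. The following is an illustrative toy model, not the thesis's hardware design: all function names are hypothetical, a 1-D neighbourhood stands in for the real 2-D/3-D context, and no arithmetic coding is performed; it only counts how many times each sample must be visited under the two scan orders.

    ```python
    # Toy sketch of EBCOT/ESCOT-style bit-plane coding (all names hypothetical).
    # Contrasts the conventional 3-pass scan with a merged single-pass scan
    # of the kind the Word-Level Process Block Coding performs in hardware.

    def bit_planes(coeffs, num_planes):
        """Split coefficient magnitudes into bit-planes, MSB first."""
        return [[(abs(c) >> p) & 1 for c in coeffs]
                for p in range(num_planes - 1, -1, -1)]

    def has_significant_neighbour(significant, i):
        """1-D stand-in for the real 2-D/3-D significance neighbourhood."""
        return (i > 0 and significant[i - 1]) or \
               (i + 1 < len(significant) and significant[i + 1])

    def three_pass_scan(plane, significant):
        """Conventional scan: three full traversals per bit-plane
        (significance propagation, magnitude refinement, cleanup)."""
        visits = 0
        coded = [False] * len(plane)
        # Pass 1: significance propagation (insignificant samples with
        # a significant neighbour)
        for i, bit in enumerate(plane):
            visits += 1
            if not significant[i] and has_significant_neighbour(significant, i):
                coded[i] = True
                if bit:
                    significant[i] = True
        # Pass 2: magnitude refinement (already-significant samples)
        for i, _bit in enumerate(plane):
            visits += 1
            if significant[i] and not coded[i]:
                coded[i] = True
        # Pass 3: cleanup (everything left over)
        for i, bit in enumerate(plane):
            visits += 1
            if not coded[i] and bit:
                significant[i] = True
        return visits

    def single_pass_scan(plane, significant):
        """Merged scan: the pass membership of each sample is decided
        on the fly, so every sample is visited exactly once per plane."""
        visits = 0
        for i, bit in enumerate(plane):
            visits += 1
            if significant[i]:
                pass  # magnitude-refinement bit
            elif bit:
                significant[i] = True  # significance or cleanup bit
        return visits

    coeffs = [3, -7, 0, 12, 5, -1, 0, 9]   # toy "wavelet coefficients"
    planes = bit_planes(coeffs, 4)

    sig3, sig1 = [False] * 8, [False] * 8
    v3 = sum(three_pass_scan(p, sig3) for p in planes)
    v1 = sum(single_pass_scan(p, sig1) for p in planes)
    print(v3, v1)  # 3x as many sample visits for the three-pass scan
    ```

    Both scans end with the same significance state, but the merged scan touches each sample a third as often per bit-plane, which is the source of the throughput and power savings claimed above; the real architecture additionally processes 8 samples from 4 bit-planes per cycle, which this sequential sketch does not model.
    
    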
    Appears in Collections: [Department of Electrical Engineering] Theses and Dissertations
