    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/109707

    Title: A GPU-Accelerated Adaptive Simultaneous Dynamic Range Compression and Local Contrast Enhancement Algorithm for Real-Time Color Image Enhancement
    Authors: Tsai, Chi-Yi;Huang, Chih-Hung
    Keywords: GPU acceleration;NVIDIA CUDA;Color image enhancement;Dynamic range compression;Local-contrast enhancement
    Date: 2015-09-01
    Issue Date: 2017-03-03 02:10:52 (UTC+8)
    Publisher: Springer International Publishing
    Abstract: Dynamic range compression is an important function in modern digital video cameras and displays for improving the visual quality of standard-dynamic-range color images. This chapter presents a real-time implementation of an adaptive, contrast-enhancing image dynamic range compression algorithm on a graphics processing unit (GPU) for color image enhancement. To this end, an image-dependent nonlinear intensity transfer function is first presented that produces a satisfactory dynamic-range compression result with fewer color artifacts. The proposed algorithm is then derived by combining this nonlinear intensity transfer function with an existing simultaneous dynamic range compression and local-contrast enhancement (SDRCLCE) algorithm, a parallelizable method that compresses image dynamic range while enhancing the local contrast of output images. Finally, the proposed algorithm is implemented on the GPU using the NVIDIA Compute Unified Device Architecture (CUDA), achieving real-time performance in processing high-resolution color images. The proposed GPU-accelerated color image enhancement method has been implemented on an NVIDIA NVS 5200M GPU. Experimental results show that the proposed GPU implementation achieves approximately a 7× speedup, including the cost of memory copies between host and device, compared with a lookup-table (LUT)-accelerated implementation on an Intel Core i7-3520M CPU for color images of 1024 × 1024 pixels.
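
    The abstract describes applying an image-dependent nonlinear intensity transfer to compress dynamic range while limiting color artifacts. The chapter's exact transfer function is not reproduced here; the following is only a minimal illustrative sketch of the general idea, using an assumed log-style transfer on luminance with the RGB channels rescaled by a common ratio (one common way to avoid hue shifts):

    ```python
    import numpy as np

    def compress_dynamic_range(rgb, strength=0.6):
        """Illustrative dynamic range compression (NOT the chapter's exact method).

        A log-style nonlinear transfer is applied to a luminance proxy, and the
        RGB channels are scaled by the same luminance ratio so that chromatic
        proportions are preserved, reducing color artifacts.

        rgb: float array in [0, 1] with shape (H, W, 3).
        """
        eps = 1e-6
        lum = rgb.mean(axis=2, keepdims=True)        # crude luminance proxy
        # Image-dependent parameter: darker images get stronger compression.
        k = strength / (lum.mean() + eps)
        lum_out = np.log1p(k * lum) / np.log1p(k)    # maps [0, 1] onto [0, 1]
        ratio = lum_out / (lum + eps)                # per-pixel gain
        return np.clip(rgb * ratio, 0.0, 1.0)
    ```

    In a GPU implementation such as the one described, the per-pixel transfer and gain would map naturally onto one CUDA thread per pixel, which is why this class of algorithm parallelizes well.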
    Relation: Color Image and Video Enhancement
    DOI: 10.1007/978-3-319-09363-5_8
    Appears in Collections: [Department of Electrical Engineering] Book Chapters

    All items in the institutional repository are protected by copyright, with all rights reserved.
