    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/96019


    Title: A Fast Sequential MRU Cache with Competitive Hardware Cost
    Authors: Chen, Hsin-Chuan;Chiang, Jen-Shiun;Lin, Yu-Sen
    Contributors: Department of Electrical Engineering, Tamkang University (淡江大學電機工程學系)
    Keywords: Most Recently Used Cache; Set Associative Cache; Average Access Time; Hardware Cost; Parallel Architecture
    Date: 2001-07
    Issue Date: 2014-02-13 11:35:30 (UTC+8)
    Abstract: The tradeoff between direct-mapped caches and set-associative caches is an important issue in research on cache performance. Set-associative caches with higher associativity provide a lower miss rate; however, they suffer from a longer hit access time. The MRU (most recently used) cache is a set-associative cache that addresses the implementation of associativity higher than two, but its access time is increased because the MRU information must be fetched before the MRU cache is accessed. In this paper, we propose a hardware scheme that separately divides the tag memory and the data memory into n banks, associated with two multiplexers, to reduce the sequential search time. Applying this approach to the access organization of an MRU cache improves the access time of the sequential MRU cache. Furthermore, the first-hit access time of the proposed architecture is almost equal to that of an MRU cache with parallel search, while its hardware complexity is lower than that of the parallel-search MRU cache. The proposed scheme provides an excellent average access time at 4-way associativity, and it can be applied to parallel architectures, such as multiprocessor systems, to increase overall system performance.
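    The access policy the abstract describes (probe the ways of a set sequentially, starting from the most recently used one, so that hits to hot ways complete on the first probe) can be sketched in software. This is a minimal behavioral model for illustration only, not the authors' banked hardware organization; the class and method names are hypothetical:

    ```python
    # Behavioral sketch (hypothetical names, not the paper's hardware):
    # one set of an n-way cache whose ways are probed in MRU order.

    class MRUSet:
        """One set of an n-way set-associative cache with MRU-order search."""

        def __init__(self, ways=4):
            self.tags = [None] * ways           # tag stored in each way
            self.mru_order = list(range(ways))  # way indices, MRU first

        def lookup(self, tag):
            """Return (hit, probes), where probes counts sequential tag compares."""
            for probes, way in enumerate(self.mru_order, start=1):
                if self.tags[way] == tag:
                    # Hit: move this way to the front of the MRU order.
                    self.mru_order.remove(way)
                    self.mru_order.insert(0, way)
                    return True, probes
            # Miss: fill the least recently used way and make it MRU.
            victim = self.mru_order.pop()
            self.tags[victim] = tag
            self.mru_order.insert(0, victim)
            return False, len(self.tags)

    s = MRUSet(ways=4)
    s.lookup("A"); s.lookup("B")   # two misses fill the set
    hit, probes = s.lookup("B")    # "B" is now MRU: first probe hits
    print(hit, probes)             # True 1
    ```

    In this model a hit to the MRU way costs a single probe, while a worst-case hit costs n probes; the paper's contribution is a banked organization that shortens exactly this sequential search in hardware.
    
    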
    Relation: Proceedings of the Second International Conference on Parallel and Distributed Computing, Applications, and Technologies, pp. 220-227
    Appears in Collections: [Graduate Institute & Department of Electrical Engineering] Proceedings

    Files in This Item:

    File: A Fast Sequential MRU Cache with Competitive Hardware Cost_英文摘要.docx (15 KB, Microsoft Word)

    All items in 機構典藏 (the institutional repository) are protected by copyright, with all rights reserved.

