    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/92701


    Title: On Improving Fault Tolerance for Heterogeneous Hadoop MapReduce Clusters
    Authors: Lin, Chi-Yi; Chen, Ting-Hau; Cheng, Yi-No
    Contributors: Department of Computer Science and Information Engineering, Tamkang University
    Keywords: MapReduce; heterogeneous environments; intermediate data; checkpointing; speculative execution
    Date: 2013-12-16
    Issue Date: 2013-10-22 11:20:56 (UTC+8)
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    Abstract: The MapReduce computing paradigm has become extremely popular for large-scale data-intensive applications in recent years. Hadoop, an open-source implementation of MapReduce, can be set up easily and rapidly on commodity hardware to form a massive computing cluster. In such a cluster, task failures and node failures are not anomalies, and they can have a substantial impact on Hadoop's performance. Although Hadoop restarts failed tasks automatically and compensates for slow tasks through speculative execution, many researchers have identified shortcomings in Hadoop's fault tolerance. In this research, we address these shortcomings by designing a simple checkpointing mechanism for Map tasks and by using a revised criterion for identifying slow tasks. Specifically, our checkpointing mechanism saves the partial output produced by the Mappers, and our criterion for identifying slow tasks accounts for tasks with variable progress rates (illustrative sketches of both ideas appear after the collection listing below). Although preliminary simulations show only marginal performance improvements over native Hadoop and the LATE scheduler, we believe our approaches have the potential to offer greater performance gains on real workloads.
    Relation: 2013 International Conference on Cloud Computing and Big Data (CloudCom-Asia 2013)
    Appears in Collections: [Department of Computer Science and Information Engineering & Graduate Institute] Conference Papers
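
    The abstract describes the two mechanisms only at a high level, so the following are illustrative sketches rather than the paper's actual code. The first sketch shows one way a slow-task criterion could account for variable progress rates: alongside LATE's constant-rate time-left estimate, it keeps a short window of recent (time, progress) samples (the window size and sampling scheme are assumptions) and judges a task by its recent rate, so a task that has slowed down lately is flagged even if its average rate still looks healthy.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /**
     * Illustrative straggler estimator; NOT the paper's exact criterion.
     * It contrasts LATE's constant-rate time-left estimate with an estimate
     * based on the most recent progress-rate samples, to show what
     * "considering tasks with variable progress rates" could look like.
     */
    public class SlowTaskEstimator {
        private static final int WINDOW = 5;                         // recent samples kept (assumed)
        private final Deque<double[]> samples = new ArrayDeque<>();  // each entry: {timeSec, progress}

        /** Record a (time, progress) sample reported by the task; progress is in [0, 1]. */
        public void addSample(double timeSec, double progress) {
            samples.addLast(new double[]{timeSec, progress});
            if (samples.size() > WINDOW) {
                samples.removeFirst();
            }
        }

        /** LATE-style estimate: assumes the average rate so far stays constant. */
        public double timeLeftConstantRate(double nowSec, double startSec, double progress) {
            double rate = progress / (nowSec - startSec);
            return (1.0 - progress) / rate;
        }

        /** Windowed estimate: uses only the most recent samples, so a task whose
         *  rate recently dropped (or recovered) is judged on its current behaviour. */
        public double timeLeftRecentRate() {
            double[] oldest = samples.peekFirst();
            double[] newest = samples.peekLast();
            if (oldest == null || newest == null) {
                return Double.MAX_VALUE;            // no samples yet: cannot estimate
            }
            double dt = newest[0] - oldest[0];
            double dp = newest[1] - oldest[1];
            if (dt <= 0 || dp <= 0) {
                return Double.MAX_VALUE;            // no recent progress: strong straggler signal
            }
            return (1.0 - newest[1]) / (dp / dt);
        }
    }

    The second sketch illustrates the idea of checkpointing a Map task's partial work: a Mapper that periodically records the byte offset of the last processed input record to a small HDFS file (the path, the checkpoint interval, and any resume logic are hypothetical), so a re-executed attempt could in principle skip records whose partial output has already been preserved.

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    /**
     * Illustrative checkpointing Mapper; not the paper's implementation.
     * Every CHECKPOINT_EVERY records it overwrites a small HDFS file with the
     * input offset of the last processed record. Recovery would also require
     * preserving the Mapper's partial output, which is omitted here.
     */
    public class CheckpointingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final long CHECKPOINT_EVERY = 10_000;  // records between checkpoints (assumed)
        private long recordsSinceCheckpoint = 0;
        private Path checkpointPath;                           // hypothetical per-attempt location

        @Override
        protected void setup(Context context) {
            checkpointPath = new Path("/tmp/ckpt/" + context.getTaskAttemptID() + ".offset");
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // ... normal map logic would emit key/value pairs here ...
            if (++recordsSinceCheckpoint >= CHECKPOINT_EVERY) {
                FileSystem fs = FileSystem.get(context.getConfiguration());
                try (FSDataOutputStream out = fs.create(checkpointPath, true)) {  // overwrite previous checkpoint
                    out.writeLong(key.get());   // byte offset of the last processed input record
                }
                recordsSinceCheckpoint = 0;
            }
        }
    }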

    Files in This Item:

    There are no files associated with this item.
