Tamkang University Institutional Repository: Item 987654321/125168


Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/125168


Title: RLR: Joint Reinforcement Learning and Attraction Reward for Mobile Charger in Wireless Rechargeable Sensor Networks
Authors: Shang, C.; Chang, C. Y.; Liao, W.-H.; Roy, D. S.
Keywords: Mobile charger; recharging mechanism; reinforcement learning; wireless rechargeable sensor network (WRSN)
Date: 2023-09-15
Uploaded: 2024-03-07 12:05:56 (UTC+8)
Publisher: Institute of Electrical and Electronics Engineers
Abstract: Advances in wireless charging technology offer great new opportunities for extending the lifetime of a wireless sensor network (WSN), an important infrastructure of the IoT. However, existing greedy algorithms lack the ability to learn from the experience of energy dissipation trends. Unlike existing studies, this article proposes a reinforcement learning approach, called reinforcement learning recharging (RLR), for a mobile charger to learn the trends of WSNs, including the energy consumption of the sensors, the recharging cost, and the coverage benefit, aiming to maximize the coverage contribution of the recharged WSN. The proposed RLR mainly consists of three modules: the sensor energy management (SEM), charger location update (CLP), and charger reinforcement learning (CRL) modules. In the SEM module, each sensor manages its energy and calculates its threshold for the recharging request in a distributed manner. The CLP module adopts a quorum system to ensure effective communication between the sensors and the mobile charger. Meanwhile, the CRL module employs attraction rewards to reflect the coverage benefit, as well as penalties for the waiting time raised by charger movement and the recharging of other sensors. As a result, the charger accumulates learning experience in a Q-table such that it can execute the appropriate charging or moving actions through state management. Performance results show that the proposed RLR outperforms existing recharging mechanisms in terms of the charging waiting time of sensors, the energy usage efficiency of the mobile charger, and the coverage contribution of the given sensor network.
Relation: IEEE Internet of Things Journal 10(18), pp. 16107-16120
DOI: 10.1109/JIOT.2023.3267242
Appears in Collections: [資訊工程學系暨研究所] Journal Articles
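The learning mechanism described in the abstract (a charger choosing between charging and moving, guided by a Q-table with attraction rewards and waiting-time penalties) can be sketched as a standard tabular Q-learning loop. The sketch below is illustrative only: the state/action encoding, the reward shape, and the parameters `beta`, `alpha`, `gamma`, and `epsilon` are assumptions, not details taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical action space: the paper's actual formulation is richer.
ACTIONS = ["charge", "move"]

# Q-table mapping (state, action) -> learned value, default 0.0.
q_table = defaultdict(float)

def attraction_reward(coverage_gain, waiting_penalty, beta=0.5):
    """Reward combining coverage benefit and a waiting-time penalty.

    The weighting `beta` and the linear shape are assumptions made
    for illustration; RLR's exact reward definition is in the paper.
    """
    return coverage_gain - beta * waiting_penalty

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One-step Q-learning update toward the bootstrapped target."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_next - q_table[(state, action)]
    )

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy policy: explore occasionally, else act greedily."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])
```

For example, after observing a coverage gain of 10 with a waiting penalty of 4 in state `"s0"`, one `q_update` call raises the value of `("s0", "charge")`, so a greedy `choose_action("s0")` then prefers charging over moving.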

Files in This Item:

File        Description  Size  Format  Views
index.html               0Kb   HTML    53      View/Open

All items in the institutional repository are protected by original copyright.


DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library & TKU Library IR teams.