Tamkang University Institutional Repository: Item 987654321/125168
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library & TKU Library IR team.
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/125168


    Title: RLR: Joint Reinforcement Learning and Attraction Reward for Mobile Charger in Wireless Rechargeable Sensor Networks
    Authors: Shang, C.;Chang, C. Y.;Liao, W.-H.;Roy, D. S.
    Keywords: Mobile charger;recharging mechanism;reinforcement learning;wireless rechargeable sensor network (WRSN)
    Date: 2023-09-15
    Uploaded: 2024-03-07 12:05:56 (UTC+8)
    Publisher: Institute of Electrical and Electronics Engineers
    Abstract: Advances in wireless charging technology open new opportunities for extending the lifetime of a wireless sensor network (WSN), an important infrastructure of the IoT. However, existing greedy algorithms do not learn from the experience of energy-dissipation trends. Unlike existing studies, this article proposes a reinforcement learning approach, called reinforcement learning recharging (RLR), that lets a mobile charger learn the trends of a WSN, including the energy consumption of the sensors, the recharging cost, and the coverage benefit, aiming to maximize the coverage contribution of the recharged WSN. The proposed RLR consists of three modules: the sensor energy management (SEM), charger location update (CLP), and charger reinforcement learning (CRL) modules. In the SEM module, each sensor manages its own energy and calculates its recharging-request threshold in a distributed manner. The CLP module adopts a quorum system to ensure effective communication between the sensors and the mobile charger. Meanwhile, the CRL module employs attraction rewards to reflect the coverage benefit, together with penalties for the waiting time incurred by charger movement and by recharging other sensors. As a result, the charger accumulates learning experience in a Q-table, enabling it to select the appropriate action of charging or moving given its current state. Performance results show that the proposed RLR outperforms existing recharging mechanisms in terms of the sensors' charging waiting time, the mobile charger's energy-usage efficiency, and the coverage contribution of the given sensor network.
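    The abstract's CRL module (a Q-table over charger states, actions of charging or moving, and an attraction reward balancing coverage benefit against waiting-time penalties) can be sketched as a standard tabular Q-learning loop. The state encoding, action set, reward values, and helper names below are illustrative assumptions, not the paper's actual formulation.

    ```python
    import random
    from collections import defaultdict

    # Illustrative constants; the paper's learning rate, discount, and
    # exploration settings are not given in this record.
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
    ACTIONS = ["charge", "move_north", "move_south", "move_east", "move_west"]

    q_table = defaultdict(float)  # (state, action) -> estimated return

    def choose_action(state):
        """Epsilon-greedy selection over the charge/move actions."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(state, a)])

    def attraction_reward(coverage_gain, waiting_penalty):
        """Assumed reward shape: coverage benefit minus waiting-time penalty."""
        return coverage_gain - waiting_penalty

    def update(state, action, reward, next_state):
        """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s',a') - Q)."""
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )

    # Toy step: the charger at grid cell (0, 0) charges a sensor, gaining
    # coverage benefit 5.0 at a waiting-time penalty of 1.0.
    s, a = (0, 0), "charge"
    r = attraction_reward(coverage_gain=5.0, waiting_penalty=1.0)
    update(s, a, r, (0, 0))
    ```

    The update accumulates experience exactly as the abstract describes: actions that yield high coverage benefit relative to waiting-time penalties raise their Q-values and are preferred by the greedy branch of `choose_action`.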
    Relation: IEEE Internet of Things Journal 10(18), pp. 16107-16120
    DOI: 10.1109/JIOT.2023.3267242
    Appears in collections: [Department of Computer Science and Information Engineering] Journal Articles

    Files in this item:

    File        Description    Size    Format    Views
    index.html                 0 KB    HTML      57     View/Open

    All items in this institutional repository are protected by copyright.

