Tamkang University Institutional Repository: Item 987654321/126706
RC Version 7.0 © Powered by DSpace, MIT. Enhanced by NTU Library & TKU Library IR team.
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/126706


    Title: Effectively learn how to learn: a novel few-shot learning with meta-gradient memory
    Author: Hui, Lin
    Keywords: machine learning;deep learning;meta-learning;continual learning
    Date: 2024-03-25
    Uploaded: 2025-03-06 14:08:49 (UTC+8)
    Abstract: Few-shot learning has recently grown tremendously in importance owing to its wide applicability: it lets users train models with very little data while maintaining high generalisation ability. Meta-learning and continual-learning models have demonstrated strong performance in model development. However, unstable performance and catastrophic forgetting remain two critical obstacles to retaining knowledge of previous tasks when facing new ones. In this paper, a novel method, enhanced model-agnostic meta-learning (EN-MAML), is proposed to blend the flexible adaptation of meta-learning with the stable performance of continual learning and thereby tackle both problems. With the proposed method, users can train a model efficiently, effectively, and stably with few data. Experiments following the N-way K-shot protocol show that EN-MAML achieves higher accuracy, more stable performance, and faster convergence than other state-of-the-art models on several real datasets.
    Relation: International Journal of Web and Grid Services 20(1), p.3
    DOI: 10.1504/IJWGS.2024.137549
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Articles
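    Background note: EN-MAML builds on model-agnostic meta-learning (MAML). As orientation only, the inner/outer-loop structure of a first-order MAML variant can be sketched on a toy one-dimensional regression family; every function name, the task distribution, and all hyperparameters below are illustrative assumptions, not the authors' EN-MAML implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def task_batch(slope, k, rng):
        """Sample K points from the toy task y = slope * x."""
        x = rng.uniform(-1.0, 1.0, size=k)
        return x, slope * x

    def grad(w, x, y):
        """Gradient of the loss 0.5 * mean((w*x - y)^2) with respect to w."""
        return np.mean((w * x - y) * x)

    def maml_train(meta_iters=2000, tasks_per_iter=4, k=5, alpha=0.1, beta=0.01):
        """First-order MAML: learn an initialisation w that adapts quickly."""
        w = 0.0
        for _ in range(meta_iters):
            meta_grad = 0.0
            for _ in range(tasks_per_iter):
                slope = rng.uniform(-2.0, 2.0)           # sample a task
                xs, ys = task_batch(slope, k, rng)       # support (adaptation) set
                xq, yq = task_batch(slope, k, rng)       # query (evaluation) set
                w_adapted = w - alpha * grad(w, xs, ys)  # inner-loop step
                meta_grad += grad(w_adapted, xq, yq)     # first-order outer gradient
            w -= beta * meta_grad / tasks_per_iter       # outer-loop (meta) update
        return w

    def adapt(w, x, y, alpha=0.5, steps=10):
        """Few-shot adaptation: a handful of gradient steps from the meta-init."""
        for _ in range(steps):
            w = w - alpha * grad(w, x, y)
        return w
    ```

    In the N-way K-shot classification protocol the abstract refers to, the support and query sets would each contain K examples for each of N classes; the scalar regression above merely mirrors the two-level (inner adaptation, outer meta-update) optimisation.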

    Files in this item:

    File          Size    Format    Views
    index.html    0Kb     HTML      22      View/Open

    All items in the institutional repository are protected by original copyright.

    Related articles in TAIR
