    Please use this persistent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/126327


    Title: Fusion of kinematic and physiological sensors for hand gesture recognition
    Authors: Wang, Aiguo;Liu, Huancheng;Zheng, Chundi;Chen, Huihui;Chang, Chih-Yung
    Keywords: Gesture recognition;Electromyography;Accelerometer;Feature extraction;Cross-subject
    Date: 2024-01-29
    Date uploaded: 2024-10-02 12:05:43 (UTC+8)
    Abstract: The uncertainty of hand gestures, the variability of gestures across subjects, and the high cost of collecting large amounts of annotated data make robust gesture recognition challenging, so it remains crucial to capture informative features of hand movements and to mitigate inter-subject variation. To this end, we propose a gesture recognition model that uses two different types of sensors and optimizes the feature space for higher accuracy and better generalization. Specifically, we use an accelerometer and a surface electromyography sensor to capture the kinematic and physiological signals of hand movements. A sliding window divides the streaming sensor data into segments, and time-domain and frequency-domain features are extracted from each segment to form feature vectors. The feature space is then refined with a feature selector, and a gesture recognizer is optimized on the selected features. To handle the case where no labeled training data are available for a new user, we apply transfer learning to reuse cross-subject knowledge. Finally, we conduct extensive comparative experiments involving different classification models, different sensors, and different types of features. The results show that the joint use of kinematic and physiological sensors generally outperforms either sensor alone, indicating a synergistic effect between the two, and that transfer learning improves cross-subject recognition accuracy. In addition, we quantitatively investigate the impact of the null gesture on a gesture recognizer; the results indicate that the null gesture lowers its accuracy, suggesting that related studies should take it into account.
    Relation: Multimedia Tools and Applications, vol. 83, pp. 68013-68040
    DOI: 10.1007/s11042-024-18283-z
    Appears in Collections: [Department of Computer Science and Information Engineering] Journal Article
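
    The abstract above describes a pipeline of sliding-window segmentation, time- and frequency-domain feature extraction, and feature-level fusion of accelerometer (ACC) and surface EMG (sEMG) data. The Python sketch below illustrates that general idea only; it is not the authors' code, and the window length, overlap, sampling rates, and specific feature set are illustrative assumptions.

import numpy as np

def sliding_windows(signal, win_len, step):
    """Split a (samples, channels) array into overlapping windows."""
    starts = range(0, signal.shape[0] - win_len + 1, step)
    return [signal[s:s + win_len] for s in starts]

def time_features(window):
    """Per-channel time-domain features: mean, std, RMS, waveform length."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    return np.concatenate([mean, std, rms, wl])

def freq_features(window, fs):
    """Per-channel frequency-domain features: mean and median frequency of the power spectrum."""
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    power = spectrum.sum(axis=0)
    mean_freq = (freqs[:, None] * spectrum).sum(axis=0) / power
    cumsum = np.cumsum(spectrum, axis=0)
    median_freq = freqs[np.argmax(cumsum >= power / 2, axis=0)]
    return np.concatenate([mean_freq, median_freq])

def extract_features(acc, emg, fs_acc=100, fs_emg=1000, win_sec=0.25, overlap=0.5):
    """Feature-level fusion: one concatenated ACC + sEMG feature vector per segment."""
    per_sensor = []
    for stream, fs in ((acc, fs_acc), (emg, fs_emg)):
        win = int(win_sec * fs)
        step = max(1, int(win * (1 - overlap)))
        windows = sliding_windows(stream, win, step)
        per_sensor.append(np.array([np.concatenate([time_features(w), freq_features(w, fs)])
                                    for w in windows]))
    n = min(len(f) for f in per_sensor)          # align segment counts across the two sensors
    return np.hstack([f[:n] for f in per_sensor])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acc = rng.standard_normal((500, 3))    # 5 s of 3-axis accelerometer data at 100 Hz (synthetic)
    emg = rng.standard_normal((5000, 8))   # 5 s of 8-channel sEMG at 1000 Hz (synthetic)
    X = extract_features(acc, emg)
    print(X.shape)                          # (segments, fused feature dimension)

    The resulting feature matrix would then feed the feature-selection, classification, and transfer-learning steps mentioned in the abstract, which are not reproduced here.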
