Tamkang University Institutional Repository: Item 987654321/123247
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/123247


    Title: Human Action Recognition of Autonomous Mobile Robot Using Edge-AI
    Authors: Wang, Shih-Ting;Li, I-Hsum;Wang, Wei-Yen
    Keywords: Autonomous mobile robot (AMR);bidirectional long-short-term-memory (BiLSTM);edge artificial intelligence (Edge AI);human action recognition (HAR);ROS
    Date: 2023-01-15
    Issue Date: 2023-04-28 17:26:52 (UTC+8)
    Publisher: IEEE
    Abstract: The development of autonomous mobile robots (AMRs) has brought with it requirements for intelligence and safety. Human action recognition (HAR) within AMR has become increasingly important because it provides interactive cognition between humans and AMRs. This study presents a full architecture for edge-artificial intelligence HAR (Edge-AI HAR) to allow an AMR to detect human actions in real time. The architecture consists of three parts: a human detection and tracking network, a key frame extraction function, and a HAR network. The HAR network is a cascade of a DenseNet121 and a double-layer bidirectional long-short-term-memory (DLBiLSTM), in which the DenseNet121 is a pretrained model that extracts spatial features from action key frames and the DLBiLSTM provides deep two-directional LSTM inference to classify complicated time-series human actions. Edge-AI HAR undergoes two optimizations, ROS distributed computation and TensorRT structure optimization, to give a small model structure and high computational efficiency. Edge-AI HAR is demonstrated in two experiments on an AMR and achieves an average precision of 97.58% for single-action recognition and around 86% for continuous-action recognition. (A minimal sketch of the DenseNet121-DLBiLSTM cascade follows this record.)
    Relation: IEEE Sensors Journal 23(2), pp. 1671-1682
    DOI: 10.1109/JSEN.2022.3225158
    Appears in Collections:[Graduate Institute & Department of Mechanical and Electro-Mechanical Engineering] Journal Article
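
    A minimal PyTorch sketch of the HAR cascade described in the abstract: a pretrained DenseNet121 extracts a spatial feature vector from each action key frame, and a double-layer bidirectional LSTM (DLBiLSTM) classifies the resulting time series. The class count, hidden size, and last-timestep readout below are illustrative assumptions, not values taken from the paper.

        # Sketch only: hyperparameters here are assumptions, not the paper's values.
        import torch
        import torch.nn as nn
        from torchvision import models

        class DenseNetDLBiLSTM(nn.Module):
            def __init__(self, num_classes: int = 10, hidden_size: int = 256):
                super().__init__()
                # Pretrained DenseNet121 backbone; replace its classifier head
                # with Identity to expose the 1024-dim per-frame feature vector.
                backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
                backbone.classifier = nn.Identity()
                self.backbone = backbone
                # Double-layer bidirectional LSTM over the key-frame sequence.
                self.dlbilstm = nn.LSTM(input_size=1024, hidden_size=hidden_size,
                                        num_layers=2, bidirectional=True,
                                        batch_first=True)
                self.head = nn.Linear(2 * hidden_size, num_classes)

            def forward(self, frames: torch.Tensor) -> torch.Tensor:
                # frames: (batch, time, 3, H, W) key frames for one clip.
                b, t = frames.shape[:2]
                feats = self.backbone(frames.flatten(0, 1))  # (b*t, 1024)
                feats = feats.view(b, t, -1)                 # (b, t, 1024)
                seq, _ = self.dlbilstm(feats)                # (b, t, 2*hidden)
                return self.head(seq[:, -1])                 # clip-level logits

        # Example: one clip of 8 extracted key frames at 224x224.
        logits = DenseNetDLBiLSTM()(torch.randn(1, 8, 3, 224, 224))

    In the paper's pipeline this network sits behind the human detection and tracking stage and the key frame extraction function, and the deployed model is further optimized with ROS distributed computation and TensorRT; none of that is reflected in this sketch.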

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0 Kb    HTML      189

    All items in the Tamkang University Institutional Repository are protected by copyright, with all rights reserved.

