Tamkang University Institutional Repository: Item 987654321/122728
RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library & TKU Library IR team.
    Please use the permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/122728


    Title: Moving Object Prediction and Grasping System of Robot Manipulator
    Authors: Ching-Chang Wong; Ming-Yi Chien; Ren-Jie Chen; Hisayuki Aoyama; Kai-Yi Wong
    Keywords: moving object prediction; object grasping; long short-term memory (LSTM); convolutional neural network (CNN); you only look at the coefficients (YOLACT)
    Date: 2022-02-15
    Uploaded: 2022-06-02 12:11:29 (UTC+8)
    Abstract: In this paper, we designed and implemented a moving object prediction and grasping system that enables a robot manipulator with a two-finger gripper to grasp moving objects on a conveyor and on a circular rotating platform. The system has three main parts: (i) moving object recognition, (ii) moving object prediction, and (iii) system realization and verification. For moving object recognition, we used the instance segmentation algorithm You Only Look At CoefficienTs (YOLACT) to recognize moving objects; its recognition speed exceeds 30 fps, which makes it well suited to dynamic object recognition. In addition, we designed an object numbering system based on object matching so that the system can track the target object correctly. For moving object prediction, we first designed a moving position prediction network based on Long Short-Term Memory (LSTM) and a grasping point prediction network based on a Convolutional Neural Network (CNN). We then combined these two networks to design two moving object prediction networks that simultaneously predict the grasping positions and grasping angles of multiple moving objects from image information. For system realization and verification, we used the Robot Operating System (ROS) to integrate all programs of the proposed system for camera image extraction, strategy processing, and robot manipulator and gripper control. A laboratory-made conveyor, a circular rotating platform, and four different objects were used to verify that the implemented system allows the gripper to successfully grasp moving objects on both of these object-moving platforms.
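The abstract mentions an object numbering system based on object matching, used so that detections in consecutive frames keep a consistent ID while the object moves. The paper's record here does not specify the matching rule, so the following is a minimal, hypothetical sketch of one common approach — greedy nearest-centroid matching between frames — with the function name `assign_ids` and the distance threshold `max_dist` being illustrative assumptions, not the authors' implementation:

```python
import math

def assign_ids(prev_objects, detections, next_id, max_dist=50.0):
    """Greedy nearest-centroid matching across consecutive frames.

    prev_objects: dict {object_id: (x, y)} of centroids from the previous frame
    detections:   list of (x, y) centroids detected in the current frame
    next_id:      first unused ID number
    Returns (new_objects, next_id): each detection inherits the ID of the
    closest unmatched previous object within max_dist pixels; otherwise it
    is assigned a fresh ID.
    """
    new_objects = {}
    unmatched = dict(prev_objects)  # previous objects not yet claimed
    for cx, cy in detections:
        best_id, best_d = None, max_dist
        for oid, (px, py) in unmatched.items():
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best_id, best_d = oid, d
        if best_id is not None:
            new_objects[best_id] = (cx, cy)   # same object, updated position
            del unmatched[best_id]
        else:
            new_objects[next_id] = (cx, cy)   # new object entering the scene
            next_id += 1
    return new_objects, next_id

# Example: two objects drift slightly between frames, a third appears.
frame1, nid = assign_ids({}, [(10, 10), (100, 100)], 0)
frame2, nid = assign_ids(frame1, [(15, 12), (105, 98), (300, 50)], nid)
```

In this sketch the two original detections keep IDs 0 and 1 across frames, and the newcomer at (300, 50) receives ID 2 — the property the abstract's numbering system needs so the predictor can accumulate a position history per object.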
    Relation: IEEE Access 10, pp. 20159-20172
    DOI: 10.1109/ACCESS.2022.3151717
    Appears in Collections: [Department of Electrical and Computer Engineering] Journal Articles

    Files in This Item:

    File         Description   Size   Format   Views
    index.html                 0 KB   HTML     45      View/Open

    All items in the institutional repository are protected by copyright, with all rights reserved.

