    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/122728


    Title: Moving Object Prediction and Grasping System of Robot Manipulator
    Authors: Ching-Chang Wong;Ming-Yi Chien;Ren-Jie Chen;Hisayuki Aoyama;Kai-Yi Wong
    Keywords: Moving object prediction;object grasping;long short-term memory (LSTM);convolutional neural network (CNN);you only look at coefficients (YOLACT)
    Date: 2022-02-15
    Issue Date: 2022-06-02 12:11:29 (UTC+8)
    Abstract: In this paper, we designed and implemented a moving object prediction and grasping system that enables a robot manipulator with a two-finger gripper to grasp moving objects on a conveyor and on a circular rotating platform. The system has three main parts: (i) moving object recognition, (ii) moving object prediction, and (iii) system realization and verification. For moving object recognition, we used the instance segmentation algorithm You Only Look At CoefficienTs (YOLACT) to recognize moving objects. YOLACT runs at more than 30 fps, which makes it well suited to dynamic object recognition. In addition, we designed an object numbering system based on object matching so that the system can track the target object correctly. For moving object prediction, we first designed a moving position prediction network based on Long Short-Term Memory (LSTM) and a grasping point prediction network based on a Convolutional Neural Network (CNN). We then combined these two networks into two moving object prediction networks that simultaneously predict the grasping positions and grasping angles of multiple moving objects from image information. For system realization and verification, we used the Robot Operating System (ROS) to integrate all programs of the proposed system for camera image extraction, strategy processing, and robot manipulator and gripper control. A laboratory-made conveyor, a circular rotating platform, and four different objects were used to verify that the implemented system allows the gripper to successfully grasp moving objects on these two object-moving platforms.
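
    The record does not include source code; the following is a minimal, hypothetical sketch of the two prediction branches the abstract describes: an LSTM over past image-plane positions for the grasping position, and a small CNN over an object crop for the grasping angle. All layer sizes, input shapes, the (sin, cos) angle encoding, and the class names (PositionLSTM, GraspAngleCNN, MovingObjectPredictor) are illustrative assumptions, not the authors' published architecture.

# Illustrative sketch only; shapes and architecture are assumptions.
import torch
import torch.nn as nn

class PositionLSTM(nn.Module):
    """Predicts the next (x, y) grasping position from the last T observed positions."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, positions):            # positions: (batch, T, 2)
        _, (h_n, _) = self.lstm(positions)
        return self.head(h_n[-1])             # (batch, 2) predicted (x, y)

class GraspAngleCNN(nn.Module):
    """Predicts a grasping angle, encoded as (sin, cos), from a 64x64 object crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)           # (sin θ, cos θ) avoids angle wrap-around

    def forward(self, crop):                   # crop: (batch, 3, 64, 64)
        return self.head(self.features(crop))

class MovingObjectPredictor(nn.Module):
    """Combines both branches so one forward pass yields position and angle."""
    def __init__(self):
        super().__init__()
        self.position_net = PositionLSTM()
        self.angle_net = GraspAngleCNN()

    def forward(self, positions, crop):
        return self.position_net(positions), self.angle_net(crop)

if __name__ == "__main__":
    model = MovingObjectPredictor()
    past_xy = torch.rand(1, 10, 2)             # 10 past centroid positions (normalized)
    crop = torch.rand(1, 3, 64, 64)            # object crop from the segmentation mask
    xy_pred, angle_pred = model(past_xy, crop)
    print(xy_pred.shape, angle_pred.shape)     # torch.Size([1, 2]) torch.Size([1, 2])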
    Relation: IEEE Access 10, p.20159-20172
    DOI: 10.1109/ACCESS.2022.3151717
    Appears in Collections:[Graduate Institute & Department of Electrical Engineering] Journal Article

    Files in This Item:

    File          Description    Size    Format
    index.html                   0Kb     HTML

    All items in the TKU Institutional Repository (機構典藏) are protected by copyright, with all rights reserved.

