RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library & TKU Library IR team.
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/118331


    Title: Textureless Model Assembling Planning Based on a Deep Pose Estimation Network
    Other Titles: 基於深度姿態估測網路之無紋理模型拼裝規劃
    Authors: Kao, Wu-Kai;Wong, Ching-Chang;Tsai, Chi-Yi
    Keywords: Pose Estimation;Robot;CNN;Deep Learning;Robot Vision;System Integration;Robot Operating System (ROS);Unreal Engine
    Date: 2019-07-11
    Date Uploaded: 2020-03-19 12:10:45 (UTC+8)
    Abstract: This thesis proposes a deep pose estimation network for a textureless model assembly task, in which six textureless models of different shapes are assembled into a complete aircraft model. We use ROS as the development environment to integrate the proposed pose estimation network with the control system of a 7-DoF manipulator, and the target objects are randomly placed in the workspace. The proposed network first extracts feature maps from the input RGB image through a VGG backbone, then performs object detection and pose estimation through multi-task convolution layers. Because the target models are textureless, we found that the original VGG network could not achieve the desired detection rate; to improve the efficiency of image feature extraction, we therefore modify the existing VGG network, which raises the detection rate for textureless objects. The network is trained with supervised multi-task learning, in which a separate loss function for each task updates the weights of the corresponding sub-network, so that the deep convolutional neural network learns to predict the projection of the training target's 3D bounding box onto the 2D image plane. Given this output, an existing PnP algorithm estimates the relative pose between the camera and the target object, allowing the robot to locate the object's 3D coordinates and grasp it accurately to complete the model assembly task.
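    The final step described in the abstract — recovering the camera-to-object pose from the network's predicted 2D projections of the 3D bounding-box corners — is typically handled by a PnP solver (e.g. OpenCV's solvePnP). The following is a minimal, self-contained Direct Linear Transform (DLT) sketch of that step in NumPy; it is not the thesis's implementation, and the intrinsic matrix K, cube size, and pose in the demo are made-up illustrative values:

    ```python
    import numpy as np

    def recover_pose_dlt(pts_3d, pts_2d, K):
        """Recover the object pose (R, t) from 2D-3D correspondences via DLT.

        pts_3d: (N, 3) 3D bounding-box corners in the object frame.
        pts_2d: (N, 2) their predicted projections on the image plane.
        K:      (3, 3) camera intrinsic matrix.
        Needs N >= 6 non-coplanar points (the 8 box corners qualify).
        """
        n = len(pts_3d)
        A = np.zeros((2 * n, 12))
        for i, (X, x) in enumerate(zip(pts_3d, pts_2d)):
            Xh = np.append(X, 1.0)            # homogeneous 3D point
            u, v = x
            A[2 * i, 0:4] = Xh
            A[2 * i, 8:12] = -u * Xh
            A[2 * i + 1, 4:8] = Xh
            A[2 * i + 1, 8:12] = -v * Xh
        # The null space of A gives the projection matrix up to scale.
        _, _, Vt = np.linalg.svd(A)
        P = Vt[-1].reshape(3, 4)
        M = np.linalg.inv(K) @ P              # proportional to [R | t]
        # Fix the unknown scale: the rotation rows must have unit norm.
        M /= np.mean(np.linalg.norm(M[:, :3], axis=1))
        if M[2, 3] < 0:                       # object must be in front of camera
            M = -M
        R_raw, t = M[:, :3], M[:, 3]
        # Project R_raw onto SO(3) to obtain a proper rotation matrix.
        U, _, Vt2 = np.linalg.svd(R_raw)
        R = U @ Vt2
        if np.linalg.det(R) < 0:
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt2
        return R, t

    # Demo with synthetic data: project the 8 corners of a 10 cm cube under a
    # known pose, then recover that pose from the projections alone.
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    corners = np.array([[x, y, z] for x in (-0.05, 0.05)
                                  for y in (-0.05, 0.05)
                                  for z in (-0.05, 0.05)])
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.10, -0.05, 0.60])
    cam_pts = corners @ R_true.T + t_true     # transform into the camera frame
    proj = cam_pts @ K.T
    pts_2d = proj[:, :2] / proj[:, 2:3]       # perspective division
    R_est, t_est = recover_pose_dlt(corners, pts_2d, K)
    ```

    With noisy network predictions, a robust solver (e.g. a RANSAC-wrapped or iterative PnP) would be used in place of the plain DLT shown here.
    
    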
    Appears in Collections: [Department of Electrical Engineering] Conference Papers

    Files in This Item:

    File        Size   Format   Views
    index.html  0 KB   HTML     103     View/Open

    All items in this institutional repository are protected by the original copyright.

