Tamkang University Institutional Repository: Item 987654321/125266
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/125266


    Title: Vision-based mobile collaborative robot incorporating a multi-camera localization system
    Authors: Hsu, C.-C.; Hwang, P.-J.; Wang, W.-Y.; Wang, Y.-T.; Lu, C.-K.
    Keywords: Collaborative robot (cobot); mobile cobot; multicamera localization; natural language processing (NLP)
    Date: 2023-08-07
    Date uploaded: 2024-03-12 12:06:15 (UTC+8)
    Publisher: Institute of Electrical and Electronics Engineers
    Abstract: As the Industry 4.0 landscape unfolds, collaborative robots (cobots) play an important role in intelligent manufacturing. Compared with industrial robots, cobots are more flexible and intuitive to program, especially for industrial and home service applications; however, there are still issues to be solved, including understanding human intention in a natural way, adaptability in executing tasks, and robot mobility in a working environment. In an attempt to solve the aforementioned problems, in this article we propose a modularized solution for mobile cobot systems, where the cobot, equipped with a multicamera localization scheme for self-localization, can understand human intention via voice commands and execute tasks in an unseen scenario in a small-area working environment. As far as intention understanding is concerned, we devise a natural language processing approach to establish an action base that describes human commands. According to the action base, the robot can then execute the tasks by planning a trajectory with the help of an object localization module, which integrates the point cloud and the object detected by YOLOv4 to locate the object’s position in 3-D space. Depending on where the cobot interacts with the object, the cobot might need to navigate around the working environment. Thus, we also establish a low-cost and high-efficiency multicamera localization system with ArUco markers to locate the mobile cobot in a larger sensing area. The experimental results show that the proposed vision-based mobile cobot can successfully interact with a human operator to assemble a wooden chair in a small workshop.
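    The low-cost multicamera localization described in the abstract is built on ArUco fiducial markers. As a rough illustration of that building block only (a minimal sketch, not the authors' implementation), the Python snippet below uses OpenCV's aruco module (4.7+ API) to detect one marker and recover the camera's pose relative to it; the intrinsics, distortion coefficients, marker size, and input image path are all placeholder assumptions.

    # Minimal sketch (not the paper's code): camera pose from one ArUco marker.
    # Assumes OpenCV >= 4.7; camera_matrix, dist_coeffs, MARKER_LEN, and the
    # input image path are placeholder values for illustration only.
    import cv2
    import numpy as np

    MARKER_LEN = 0.10  # assumed printed marker side length, in meters
    camera_matrix = np.array([[600.0, 0.0, 320.0],
                              [0.0, 600.0, 240.0],
                              [0.0, 0.0, 1.0]])  # placeholder intrinsics
    dist_coeffs = np.zeros(5)                    # placeholder distortion

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    frame = cv2.imread("camera_view.png")        # hypothetical camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)

    if ids is not None:
        # Marker corner coordinates in the marker's own frame (z = 0 plane),
        # ordered to match the detector's output (clockwise from top-left).
        half = MARKER_LEN / 2.0
        obj_pts = np.array([[-half,  half, 0.0],
                            [ half,  half, 0.0],
                            [ half, -half, 0.0],
                            [-half, -half, 0.0]], dtype=np.float32)
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            ok, rvec, tvec = cv2.solvePnP(obj_pts, marker_corners[0],
                                          camera_matrix, dist_coeffs)
            if ok:
                # tvec places the marker in the camera frame; inverting the
                # pose gives the camera position in the marker's frame.
                R, _ = cv2.Rodrigues(rvec)
                cam_in_marker = -R.T @ tvec
                print(f"marker {marker_id}: camera at {cam_in_marker.ravel()} m")

    Whether the markers are fixed in the workshop and the camera rides on the cobot, or the cameras are fixed and observe a marker on the cobot, this detect-then-solvePnP step supplies the per-camera pose measurements that a multicamera system would fuse; the abstract does not pin down the arrangement, so the sketch stays agnostic.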
    Relation: IEEE Sensors Journal 23(18), pp. 21853-21861
    DOI: 10.1109/JSEN.2023.3300301
    Appears in collections: [Department of Artificial Intelligence] Journal articles

    Files in this item:

    File         Description   Size   Format   Views
    index.html   -             0Kb    HTML     73

    All items in the institutional repository are protected by the original copyright.

