Tamkang University Institutional Repository: Item 987654321/125266


    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/125266


    Title: Vision-based mobile collaborative robot incorporating a multi-camera localization system
    Authors: Hsu, C.-C.; Hwang, P.-J.; Wang, W.-Y.; Wang, Y.-T.; Lu, C.-K.
    Keywords: Collaborative robot (cobot); mobile cobot; multicamera localization; natural language processing (NLP)
    Date: 2023-08-07
    Date uploaded: 2024-03-12 12:06:15 (UTC+8)
    Publisher: Institute of Electrical and Electronics Engineers
    Abstract: As the Industry 4.0 landscape unfolds, collaborative robots (cobots) play an important role in intelligent manufacturing. Compared with industrial robots, cobots are more flexible and intuitive to program, especially for industrial and home service applications; however, there are still issues to be solved, including understanding human intention in a natural way, adaptability in executing tasks, and robot mobility in a working environment. As an attempt to solve the aforementioned problems, in this article we propose a modularized solution for mobile cobot systems, where the cobot, equipped with a multicamera localization scheme for self-localization, can understand human intention via voice commands and execute tasks in an unseen scenario in a small-area working environment. As far as intention understanding is concerned, we devise a natural language processing approach to establish an action base that describes human commands. According to the action base, the robot can then execute the tasks by planning a trajectory with the help of an object localization module, which integrates the point cloud and the object detected by YOLOv4 to locate the object's position in 3-D space. Depending on where the cobot interacts with the object, the cobot might need to navigate around the working environment. Thus, we also establish a low-cost and high-efficiency multicamera localization system with ArUco markers to locate the mobile cobot in a larger sensing area. The experimental results show that the proposed vision-based mobile cobot can successfully interact with a human operator to assemble a wooden chair in a small workshop.
    Relation: IEEE Sensors Journal 23(18), pp. 21853-21861
    DOI: 10.1109/JSEN.2023.3300301
    Appears in Collections: [Department of Artificial Intelligence] Journal Articles
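    The abstract above describes two vision components: an object localization module that fuses YOLOv4 detections with a point cloud, and self-localization of the mobile cobot from fixed cameras observing ArUco markers. As a rough, hypothetical illustration of the latter only (this is not code from the paper; the marker dictionary, the 10 cm marker size, and the camera intrinsics are all assumed placeholders), the Python sketch below estimates each detected marker's pose in a single camera's frame using OpenCV's aruco module and solvePnP:

        # Hypothetical single-camera ArUco pose estimation sketch (not from the paper).
        # Assumes OpenCV >= 4.7 with the aruco module, 4x4 markers 10 cm on a side,
        # and placeholder intrinsics; real values would come from camera calibration.
        import cv2
        import numpy as np

        MARKER_LENGTH = 0.10  # marker side length in metres (assumed)

        camera_matrix = np.array([[600.0,   0.0, 320.0],
                                  [  0.0, 600.0, 240.0],
                                  [  0.0,   0.0,   1.0]])
        dist_coeffs = np.zeros(5)

        # Marker corners in the marker's own frame (z = 0), in ArUco corner order:
        # top-left, top-right, bottom-right, bottom-left.
        h = MARKER_LENGTH / 2.0
        object_points = np.array([[-h,  h, 0.0],
                                  [ h,  h, 0.0],
                                  [ h, -h, 0.0],
                                  [-h, -h, 0.0]], dtype=np.float32)

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

        def marker_poses(frame):
            """Return {marker_id: (rvec, tvec)}, each marker's pose in the camera frame."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = detector.detectMarkers(gray)
            poses = {}
            if ids is None:
                return poses
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                ok, rvec, tvec = cv2.solvePnP(object_points,
                                              marker_corners.reshape(4, 2).astype(np.float32),
                                              camera_matrix, dist_coeffs)
                if ok:
                    poses[int(marker_id)] = (rvec, tvec)
            return poses

    In a multi-camera setup such as the one described in the abstract, each camera's marker poses would then be transformed into a common world frame (using the known camera or marker placements) and fused to track the cobot over the larger sensing area.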

    Files in This Item:

    File       | Description | Size | Format | Views
    index.html |             | 0 KB | HTML   | 68

    All items in the institutional repository are protected by original copyright.
