    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/76922


    Title: 人形機器人的視覺辨識、建模、導引及模仿之研究
    Other Titles: The Visual Recognition, Modeling, Navigation and Imitation of Humanoid Robots
    Authors: 黃志良
    Contributors: Department of Electrical Engineering, Tamkang University
    Keywords: Humanoid robot;visual recognition;visual modeling;visual navigation;visual imitation;omnidirectional vision;active vision;neural network;skin color recognition and tracking;imitation of 2D and 3D human (or humanoid robot) body and hand gesture
    Date: 2011
    Issue Date: 2012-05-22 21:55:04 (UTC+8)
    Abstract: This project focuses on the visual recognition, modeling, navigation, and imitation of humanoid robots. Four kinds of embedded vision systems are considered: (i) a TI TMS320C6713 DSP with a VM480CCD camera module, (ii) an omnidirectional vision system (VS-C14U-33/80-ST) with an RB-100 WinX86 microprocessor, (iii) a webcam with an RB-100 WinX86 microprocessor, and (iv) a stereo vision system (Bumblebee BB2-0253) with an RB-100 WinX86 microprocessor. The first-year project compares the omnidirectional vision system with the other three vision systems, which require servo control, for the search, recognition, and modeling of target objects. Its main research topics include the hardware architectures and operating systems of these platforms; image-processing methods; detection, recognition, and tracking of skin color; recognition of 2D and 3D human (or humanoid-robot) body and hand gestures; neural-network modeling of the mapping between world coordinates and image-plane coordinates; and error analysis and comparison.

    Based on the results of the first-year project and our previous humanoid-robot research, the second-year project applies these four vision systems to the navigation and manipulation of humanoid robots. Its main tasks are (i) the penalty kick of a humanoid robot; (ii) detecting an obstacle's location and size, guiding the robot to step over it, and then searching for, grasping, and carrying a specified object to a specified position for release; (iii) guiding the robot up and down stairs, and then searching for, grasping, and carrying a specified object to a specified position for release; and (iv) using different hand gestures to command the robot's various manipulation actions.

    The visual imitation of humanoid robots is addressed in the third year. First, the body and hand postures of a human (or another humanoid robot) are recognized visually; then the image sequence of the relevant key points is extracted, and through learning the robot performs the same (or a similar) motion. For example, the key points for body posture can be (i) five points: the head and the tips of both feet and both hands, or (ii) thirteen points: the left and right shoulders, elbows, wrists, hips, knees, and ankles plus the body center; the key points for hand gesture can be the shoulder, elbow, wrist, palm, and fingers. Visual imitation learning consists of three main steps: detection, recognition, and realization. Applying the first-year results on recognition and modeling and the second-year results on navigation and manipulation, the imitation studies are (i) visually imitating a human who grasps a specified object from one table and places it at a specified position on another table, (ii) using the stereo vision system to imitate 3D human hand gestures, and (iii) using the stereo vision system to imitate the 3D body posture of another humanoid robot. As the above description indicates, this project is both highly challenging and highly innovative.
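    The first-year topic of skin-color detection and tracking can be illustrated with a minimal sketch. The explicit RGB thresholds below are a classic textbook rule, not the method reported by this project, and `skin_centroid` is a hypothetical helper standing in for frame-to-frame tracking:

```python
import numpy as np

def skin_mask(rgb):
    """Classify each pixel of an RGB image (H x W x 3, uint8) as skin or not,
    using a classic explicit RGB rule (thresholds are illustrative)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r - g > 15) & (r > b) &
            (rgb.max(-1).astype(int) - rgb.min(-1).astype(int) > 15))

def skin_centroid(rgb):
    """Return the (row, col) centroid of skin pixels, or None if none are
    found -- a minimal stand-in for tracking the hand or face between frames."""
    mask = skin_mask(rgb)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

# A toy 4x4 frame: one skin-colored pixel in a dark background.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 2] = (200, 120, 90)   # a typical skin tone
print(skin_centroid(frame))    # -> (1.0, 2.0)
```

    In a real tracking loop, the centroid computed for each frame would drive the active vision system's servo control to keep the skin region centered in the image.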
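    The neural-network modeling between world coordinates and image-plane coordinates can likewise be sketched. The affine "camera" `A`, the noise level, the network size, and the training schedule below are all illustrative assumptions, not the project's reported setup; the point is only that a small network can learn the image-to-world mapping from sample pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: world points on a ground plane mapped to
# image-plane pixels by a made-up affine camera, plus sensor noise.
world = rng.uniform(-1.0, 1.0, size=(200, 2))          # (x, y) in metres
A = np.array([[120.0, -30.0], [25.0, 140.0]])          # hypothetical camera matrix
image = world @ A.T + np.array([160.0, 120.0])         # pixel coordinates
image += rng.normal(0.0, 0.5, image.shape)             # pixel noise

# One-hidden-layer network: image (u, v) -> world (x, y).
H = 16
W1 = rng.normal(0, 0.1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 2)); b2 = np.zeros(2)

# Normalise inputs so plain gradient descent behaves well.
mu, sd = image.mean(0), image.std(0)
X, Y = (image - mu) / sd, world

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(4000):                  # full-batch gradient descent
    h, pred = forward(X)
    err = pred - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(((forward(X)[1] - Y) ** 2).mean()))
print(f"training RMSE: {rmse:.3f} m")
```

    The same fitting procedure, run on measured image/world point pairs instead of synthetic ones, yields the coordinate model whose residual error the first-year project would then analyze and compare across the four vision systems.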
    Appears in Collections: [Department and Graduate Institute of Electrical Engineering] Research Reports

    Files in This Item:

    There are no files associated with this item.

    All items in the institutional repository are protected by copyright, with all rights reserved.

