    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/88078


    Title: 應用類神經網路定位為基礎的主動視覺於人形機器人搜尋、罰踢之研究 (A study of neural-network-localization-based active vision for target searching and penalty kicking by humanoid robots)
    Other Titles: Search, track and kick to virtual target point of humanoid robots by a neural-network-based active embedded vision system
    Authors: 周尹喆;Chou, Yin-Che
    Contributors: Master's Program, Department of Electrical Engineering, Tamkang University (淡江大學電機工程學系碩士班)
    黃志良;Hwang, Chih-Lyang
    Keywords: humanoid robot; penalty kick; target searching; image processing for localization; modeling using multilayer neural network; strategy for visual navigation; posture revision (人形機器人;罰踢;搜尋目標;影像定位;類神經網路建模;視覺導引;姿態調整)
    Date: 2012
    Issue Date: 2013-04-13 11:59:46 (UTC+8)
    Abstract: The experimental platform of this study is a small humanoid robot with 23 degrees of freedom, a height of 65 cm, and a weight of 4 kg. Its core is an RB-100 embedded single-board computer; through the human-machine interface we designed and the integrated motor-control circuits, the robot carries out assigned actions such as walking straight, turning, and kicking. The embedded vision system consists of a Texas Instruments TMS320C6713 digital signal processor, a VM480CCD vision module, and the accompanying software (Code Composer Studio), which together realize neural-network-based active search and localization of the target object and guide the humanoid robot to execute a penalty kick. The research comprises four parts: mechanism design and gait planning of the humanoid robot, visual image processing, neural-network-based active search and localization of the target ball, and the navigation strategy for the humanoid robot.
    Because localization accuracy of the target ball is one of the most important factors in the penalty-kick task, this thesis performs localization with a neural network: the image-plane coordinates formed by visual projection are first transformed into world coordinates, which then guide the robot's actions. Once the vision system finds the target ball, the vision module sends the captured image to the TMS320C6713 for processing, including binarization, median filtering to remove noise, image correction, and computation of the ball's center position. The trained neural network then computes, in real time and accurately, the bearing and distance between the ball and the robot, guiding the robot toward the position set relative to the ball. When the robot reaches a point about 10 cm in front of the ball, the vision system searches for the goal; if the goal is found, its virtual center is located and the robot's kicking posture is revised, after which the kick is executed. If the goal is not found, the robot kicks the ball forward, chases it, and searches for the goal again, repeating until the goal is found and the penalty kick is completed. Finally, experiments demonstrate the effectiveness and feasibility of the proposed method.
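    The image-processing chain described in the abstract (binarization, median filtering, and computation of the ball's center) can be sketched as follows. This is a minimal NumPy illustration of those three steps, not the thesis's DSP implementation; the threshold and window size are assumed values:

    ```python
    import numpy as np

    def locate_ball(gray, threshold=128, k=3):
        """Binarize, median-filter, and return the ball's centroid in pixels."""
        # Binarization: pixels brighter than the threshold are foreground.
        binary = (gray > threshold).astype(np.uint8)

        # k x k median filter to suppress salt-and-pepper noise.
        h, w = binary.shape
        pad = k // 2
        padded = np.pad(binary, pad, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
        filtered = np.median(windows.reshape(h, w, -1), axis=2)

        # Centroid of the remaining foreground pixels = ball center (u, v).
        ys, xs = np.nonzero(filtered)
        if len(xs) == 0:
            return None  # ball not found in this frame
        return float(xs.mean()), float(ys.mean())
    ```

    On the real platform these steps run on the TMS320C6713 at frame rate; the sketch only shows the arithmetic of each stage.
    
    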
    In this thesis, the Texas Instruments TMS320C6713 digital signal processor, the VM480CCD vision module, and the related software (Code Composer Studio) are employed to realize "search, track, and kick to a virtual target point" for humanoid robots via a neural-network-based active embedded vision system. A human-machine interface is also designed on an on-board computer (an RB-100) to implement the variety of actions, e.g., walking, turning, and kicking, required by the penalty-kick task. The research integrates four parts: the humanoid robot's mechanism design and gait planning, the visual image processing for the penalty kick, the neural-network modeling for searching and positioning, and the visual-navigation strategy that guides the humanoid robot to execute the penalty kick.
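    The neural-network modeling for positioning can be illustrated with a small multilayer network trained to map image-plane coordinates (u, v) to ground-plane coordinates (x, y), as the abstract describes. The sketch below uses a synthetic perspective-like mapping as a stand-in for calibration data; the network size, learning rate, and data are assumptions, not the thesis's actual setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic calibration pairs: image-plane (u, v) -> ground-plane (x, y).
    # A toy perspective-like mapping stands in for the thesis's real data.
    UV = rng.uniform(-1.0, 1.0, size=(200, 2))
    XY = np.stack([UV[:, 0] / (1.5 - UV[:, 1]),
                   0.5 / (1.5 - UV[:, 1])], axis=1)

    # One hidden layer of tanh units, trained by full-batch gradient descent.
    W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)

    def forward(U):
        H = np.tanh(U @ W1 + b1)       # hidden activations
        return H, H @ W2 + b2          # predicted (x, y)

    for _ in range(5000):
        H, pred = forward(UV)
        err = pred - XY
        gW2 = H.T @ err / len(UV); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H**2)          # backprop through tanh
        gW1 = UV.T @ dH / len(UV); gb1 = dH.mean(0)
        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= 0.1 * g               # in-place update keeps references valid

    mse = float(((forward(UV)[1] - XY) ** 2).mean())
    ```

    After training, `forward` plays the role of the thesis's localization model: given the ball center in pixels, it returns an estimated bearing/distance in world coordinates for the navigation strategy.
    
    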
    One of the most important factors for the penalty kick is locating the ball accurately. To this end, the relation between the image-plane coordinates and the world coordinates of the ball is modeled by a neural network with a suitable learning law. First, the CCD module captures the visual image and sends it to the TMS320C6713 for image processing, including binary segmentation, median filtering to remove noise, image correction, and calculation of the ball's center. The resulting image coordinates of the ball and their ground truths are then used to construct a neural-network model between the image-plane and world coordinates. When the trained model has navigated the humanoid robot into the vicinity of the ball (about 10 cm away), the vision system starts searching for the goal. If the goal is found, the robot's posture is revised and the penalty kick is executed. If it is not found, the ball is kicked forward and then tracked, and the procedure repeats until the goal is found. Finally, experiments confirm the effectiveness and efficiency of the proposed method.
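    The navigation strategy described above (approach the ball, search for the goal, kick toward it if found, otherwise kick forward, chase, and repeat) can be modeled as a simple decision loop. The sketch below simulates that strategy over a scripted sequence of goal sightings; the action names are illustrative, not the thesis's code:

    ```python
    def penalty_kick_strategy(goal_sightings):
        """Simulate the penalty-kick strategy on a scripted sequence.

        Each entry of goal_sightings says whether the goal is visible once
        the robot has approached to ~10 cm from the ball. Returns the ordered
        list of actions taken until the penalty kick is completed.
        """
        actions = []
        for goal_visible in goal_sightings:
            actions.append("approach_ball")        # NN-guided walk to the ball
            if goal_visible:
                actions.append("revise_posture")   # align with goal's virtual center
                actions.append("kick_to_goal")     # penalty kick completed
                return actions
            actions.append("kick_forward")         # goal unseen: kick ahead,
            actions.append("track_ball")           # then chase and search again
        return actions
    ```

    For example, if the goal only becomes visible on the second approach, the loop yields one kick-and-chase cycle before the final posture revision and kick.
    
    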
    Appears in Collections: [Department of Electrical Engineering and Graduate Institute] Theses and Dissertations (電機工程學系暨研究所 學位論文)

    Files in This Item:

    File: index.html (0 KB, HTML)

    All items in this institutional repository (機構典藏) are protected by copyright, with all rights reserved.

