    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/88081

    Title: 加強型SIFT與傳統型Hough Transform於人形機器人視覺自動導引的目標抓取之比較
    Other Titles: A comparison of vision-based autonomous navigation for target grasping of humanoid robot by enhanced SIFT and traditional HT algorithms
    Authors: 蘇上凱;Su, Shang-Kai
    Contributors: 淡江大學電機工程學系碩士班 (Master's Program, Department of Electrical Engineering, Tamkang University)
    黃志良;Hwang, Chih-Lyang
    Keywords: Scale-invariant feature transform (SIFT);Active stereo vision for 3-D localization;Modeling using multilayer neural network;Hough Transform;Visual navigation;Target grasping;Humanoid robot
    Date: 2013
    Issue Date: 2013-04-13 11:59:54 (UTC+8)
    Abstract: This thesis uses two single-board computers (a PICO820 and a Roborad-100) and two C905 webcams (with a recognizable distance of about 4 m) to realize and compare enhanced SIFT and the traditional Hough Transform for vision-based autonomous navigation and target grasping by a humanoid robot. The target is placed at an unknown 3-D position beyond the recognizable distance of the vision system (e.g., about 10 m away).
    The most common and classic approach to vision-guided corridor navigation applies the Hough Transform (HT) to detect straight edges and guides the humanoid robot along the detected lines. Accordingly, the enhanced SIFT approach (i.e., landmark recognition by SIFT combined with 3-D localization by a neural network) and the traditional Hough Transform are compared in the same environment for vision-guided target grasping by the humanoid robot. Finally, two experiments per method are used to compare their relative strengths and weaknesses.
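    As context for the traditional approach described above, straight-line detection by the Hough Transform can be sketched as voting in (rho, theta) space over edge pixels. This is a minimal NumPy illustration, not the thesis's actual pipeline; the image size, edge points, and accumulator resolution are assumed for the example (a real system would first run an edge detector such as Canny):

    ```python
    import numpy as np

    def hough_lines(edge_points, img_shape, n_theta=180):
        """Minimal Hough transform: each edge pixel votes for all (rho, theta)
        lines passing through it; peaks in the accumulator are detected lines."""
        h, w = img_shape
        diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
        thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
        for y, x in edge_points:
            # rho = x*cos(theta) + y*sin(theta); shift by diag to index >= 0
            rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
            acc[rhos, np.arange(n_theta)] += 1
        return acc, thetas, diag

    # A vertical edge at x = 5 on a hypothetical 20x20 image:
    pts = [(y, 5) for y in range(20)]
    acc, thetas, diag = hough_lines(pts, (20, 20))
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    # strongest vote: theta = 0 (a vertical line), rho = 5
    ```

    In a corridor, the dominant peaks correspond to the floor-wall edges, whose detected lines the robot then follows.
    
    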
    This thesis realizes a humanoid robotic (HR) system that executes target grasping (TG) at an unknown 3-D world coordinate which lies beyond the recognizable distance of the vision system or is occluded by buildings. Suitable landmarks with known 3-D world coordinates are arranged at appropriate locations or learned along the path of the experimental environment. Before detecting and recognizing a landmark (LM), the HR is navigated by a pre-planned trajectory to reach the vicinity of the arranged LMs. After recognizing a specific LM via the scale-invariant feature transform (SIFT), the corresponding pre-trained multilayer neural network (MLNN) is employed to obtain on-line the relative distance between the HR and that LM. Based on the localization corrections obtained through the LMs and the target search, the HR can be correctly navigated to the neighborhood of the target. Because the inverse kinematics (IK) of the two arms is time-consuming, another off-line MLNN model is applied to approximate the transform between the estimated ground truth of the target and the joint coordinates of the arm. Finally, comparisons between the so-called enhanced SIFT and the traditional Hough transform (HT), which detects straight lines to navigate the HR toward the execution of target grasping, confirm the effectiveness and efficiency of the proposed method.
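    The landmark-to-distance step in the abstract (a pre-trained MLNN regressing the relative HR-LM distance from image measurements) amounts to a feed-forward pass at run time. The sketch below assumes illustrative layer sizes, input features, and random placeholder weights, none of which come from the thesis; a real deployment would load the trained weights:

    ```python
    import numpy as np

    def mlnn_forward(x, W1, b1, W2, b2):
        """Two-layer feed-forward network: landmark measurements -> relative
        3-D offset. tanh hidden layer, linear output layer."""
        h = np.tanh(x @ W1 + b1)      # hidden layer activations
        return h @ W2 + b2            # estimated (dx, dy, dz)

    # Assumed shapes: 4 input features (e.g., pixel coordinates of the matched
    # landmark in the left and right camera views), 8 hidden units, 3 outputs.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
    rel = mlnn_forward(np.array([120.0, 64.0, 118.5, 66.0]), W1, b1, W2, b2)
    # rel is the network's estimate of the landmark's position relative to the robot
    ```

    Approximating the stereo-geometry (and, separately, the arm IK) with such a network trades an expensive analytic computation for one cheap matrix pass per frame, which is why the thesis uses MLNN models in both places.
    
    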
    Appears in Collections: [Department of Electrical Engineering and Graduate Institute] Theses and Dissertations
