Tamkang University Institutional Repository: Item 987654321/88081
    Permanent URL for citing or linking to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/88081


    Title: A Comparison of Enhanced SIFT and Traditional Hough Transform for Vision-Guided Autonomous Navigation and Target Grasping by a Humanoid Robot
    Other title: A comparison of vision-based autonomous navigation for target grasping of humanoid robot by enhanced SIFT and traditional HT algorithms
    Author: Su, Shang-Kai (蘇上凱)
    Contributors: Master's Program, Department of Electrical Engineering, Tamkang University
    Advisor: Hwang, Chih-Lyang (黃志良)
    Keywords: Scale-invariant feature transform (SIFT); active stereo vision for 3-D localization; modeling using multilayer neural network; Hough Transform; visual navigation; target grasping; humanoid robot
    Date: 2013
    Date uploaded: 2013-04-13 11:59:54 (UTC+8)
    Abstract: This thesis uses two single-board computers, a PICO820 and a RoBoard-100, together with two C905 webcams (recognizable distance of about 4 m) to realize a comparison of enhanced SIFT and traditional Hough Transform for vision-guided autonomous navigation and target grasping by a humanoid robot. The target is placed at an unknown 3-D position beyond the recognizable distance of the vision system (e.g., about 10 m away).
    Images captured by the cameras are first transmitted to the PICO820 for image processing (e.g., converting color images to grayscale to facilitate SIFT computation and recognize the relevant landmarks). The image coordinates of each landmark's center are then computed and fed into a pre-trained neural network to obtain the corresponding world coordinates, from which the humanoid robot's absolute world coordinates are estimated. By comparing this estimate with the pre-planned path, the robot autonomously corrects its position so as to return to the planned path. Passing the pre-arranged landmarks yields their absolute world coordinates, and a search for the specific target then completes the grasping task. In addition, once the robot arrives within about 12 cm of the target, the target's estimated world coordinates are fed into another pre-trained neural network to estimate the motor angles of the left and right arms for the grasping motion.
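The landmark-localization step described above, mapping a landmark's image coordinates through a pre-trained multilayer neural network to world coordinates, can be sketched as a one-hidden-layer forward pass. The weights below are random placeholders, since the thesis's trained parameters are not given:

```python
import numpy as np

def mlnn_forward(px, weights):
    """One-hidden-layer network: normalized image coords (u, v) -> world coords (x, y, z)."""
    (W1, b1), (W2, b2) = weights
    h = np.tanh(px @ W1 + b1)      # hidden layer, tanh activation
    return h @ W2 + b2             # linear output layer

rng = np.random.default_rng(0)
# Placeholder weights standing in for the thesis's pre-trained network.
weights = [(rng.normal(size=(2, 8)), np.zeros(8)),
           (rng.normal(size=(8, 3)), np.zeros(3))]

# A landmark center at pixel (320, 240), normalized by a 640x480 image size.
landmark_px = np.array([[320.0, 240.0]]) / np.array([640.0, 480.0])
world_xyz = mlnn_forward(landmark_px, weights)
print(world_xyz.shape)  # prints (1, 3)
```

In the thesis's pipeline the output would be the landmark's relative 3-D position, which is then combined with the landmark's known absolute coordinates to localize the robot.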
    The most common and classic approach to vision-guided corridor navigation applies the Hough Transform (HT) to detect straight edges and guides the humanoid robot along the detected lines. Enhanced SIFT (i.e., SIFT-based landmark recognition combined with neural-network 3-D localization) is therefore compared against the traditional Hough Transform for vision-guided target grasping in the same environment. Finally, two experiments for each method are used to compare their relative merits.
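The straight-edge detection underlying the traditional HT approach can be illustrated with a minimal Hough accumulator in NumPy. This is a textbook sketch, not the thesis's implementation, which would typically use an optimized library routine:

```python
import numpy as np

def hough_lines(edges):
    """Vote each edge pixel into a (rho, theta) accumulator, theta in 0..179 degrees."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # maximum possible |rho|
    thetas = np.deg2rad(np.arange(180))
    acc = np.zeros((2 * diag + 1, 180), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x cos(theta) + y sin(theta); shift by diag so the index is >= 0
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(180)] += 1
    return acc, diag

# Synthetic edge map: a vertical line at x = 5 in a 40x20 image.
edges = np.zeros((40, 20), dtype=np.uint8)
edges[:, 5] = 1
acc, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho = rho_idx - diag
print(rho, theta_idx)  # prints 5 0, i.e., the line x = 5 (rho = 5, theta = 0)
```

For corridor navigation, the dominant accumulator peaks correspond to the corridor's straight edges, and the robot steers to keep them at the expected angles.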
    This thesis realizes a humanoid robot (HR) system that executes target grasping (TG) at an unknown 3-D world coordinate lying beyond the recognizable distance of the vision system or occluded by buildings. Suitable landmarks with known 3-D world coordinates are arranged at appropriate locations or learned along the path of the experimental environment. Before detecting and recognizing a landmark (LM), the HR is navigated by the pre-planned trajectory to the vicinity of the arranged LMs. After a specific LM is recognized via the scale-invariant feature transform (SIFT), the corresponding pre-trained multilayer neural network (MLNN) obtains online the relative distance between the HR and that LM. Based on localization corrections through the LMs and a target search, the HR is correctly navigated to the neighborhood of the target. Because computing the inverse kinematics (IK) of the two arms is time-consuming, another MLNN model, trained offline, approximates the transform between the estimated target position and the joint coordinates of the arm. Finally, comparisons between the so-called enhanced SIFT and the traditional Hough transform (HT) for straight-line detection in navigating the HR to execute target grasping confirm the effectiveness and efficiency of the proposed method.
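The offline approximation of the arm's inverse kinematics can be illustrated on a planar 2-link arm. The link lengths below are assumed for illustration, and an ordinary least-squares polynomial regressor stands in for the thesis's MLNN; the idea in both cases is the same: fit a direct map from target position to joint angles so no iterative IK solve is needed at run time.

```python
import numpy as np

L1, L2 = 0.12, 0.10  # assumed link lengths in meters (not from the thesis)

def fk(t1, t2):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    return (L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
            L1 * np.sin(t1) + L2 * np.sin(t1 + t2))

# Sample one elbow configuration only, so the inverse map is single-valued.
T1, T2 = np.meshgrid(np.linspace(-0.5, 0.5, 30), np.linspace(0.5, 2.0, 30))
t = np.stack([T1.ravel(), T2.ravel()], axis=1)   # joint-angle targets
X, Y = fk(t[:, 0], t[:, 1])                      # corresponding end-effector positions

def features(x, y):
    """Polynomial features of the target position (stand-in for hidden-layer features)."""
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**3, y**3, x**2 * y, x * y**2], axis=1)

A = features(X, Y)
coef, *_ = np.linalg.lstsq(A, t, rcond=None)     # fit: position -> joint angles
rmse_fit = np.sqrt(np.mean((A @ coef - t) ** 2))
rmse_mean = np.sqrt(np.mean((t - t.mean(axis=0)) ** 2))
```

Once fitted, evaluating `features(x, y) @ coef` for a new target position returns joint angles in constant time, which is the speedup the offline MLNN model provides over solving the IK directly.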
    Appears in collections: [Department and Graduate Institute of Electrical Engineering] Theses and Dissertations

    Files in this item:

    File        Size  Format  Views
    index.html  0Kb   HTML    192    View/Open

    All items in the institutional repository are protected by copyright.

