Tamkang University Institutional Repository: Item 987654321/74711
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/74711


    Title: 基於影像特徵點之全方位視覺里程計的設計
    Other Titles: Image feature-based omni-directional visual odometry design
    Authors: 羅一喬;Luo, I-Chiao
    Contributors: Master's Program, Department of Electrical Engineering, Tamkang University
    翁慶昌;Wong, Ching-Chang
    Keywords: Visual odometry;Omni-directional vision system;Feature matching;SURF
    Date: 2011
    Issue Date: 2011-12-28 19:22:09 (UTC+8)
    Abstract: This thesis presents the design and implementation of an image feature-based omni-directional visual odometry system. The design has four parts: (1) distance model construction, (2) feature extraction, (3) feature matching, and (4) visual odometry output. For the distance model, a rational function interpolation method replaces the traditional calibration approach; it reduces the number of sampling points required by the omni-directional vision system while yielding a pixel-accurate omni-directional distance model. For feature extraction, the Speeded-Up Robust Features (SURF) algorithm is used to obtain the environmental features of each image frame, because it detects a large number of robust feature points. For feature matching, a new method called the main dimensional priority search is proposed; compared with the traditional k-dimensional (k-d) tree search, it eliminates the tree-building step and increases matching speed by 450%. For the odometry output, a motion estimation step computes the robot's motion and relative displacement between frames. Experimental results show that the overall translation and rotation performance of the proposed visual odometry is better than that of traditional wheel odometry; moreover, because visual odometry is insensitive to terrain and wheel friction, it can replace wheel odometry as a source of sensing information for sensor fusion.
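    The thesis does not publish its motion-estimation equations on this page, but the step it describes (computing the robot's rotation and translation from matched feature points between two frames) is commonly solved with the Kabsch/SVD method. The following is a minimal illustrative sketch under a 2-D rigid-body assumption; the function name and setup are hypothetical, not the author's code.

    ```python
    # Hypothetical sketch: recover 2-D rotation R and translation t such that
    # dst ≈ R @ src + t, given matched feature coordinates from two frames.
    # This is the standard Kabsch/SVD solution, used here as a stand-in for
    # the thesis's motion-estimation step.
    import numpy as np

    def estimate_rigid_motion(src, dst):
        """src, dst: (N, 2) arrays of matched feature points from two frames."""
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        # Centre both point sets so the rotation can be solved independently
        # of the translation.
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        # SVD of the cross-covariance matrix gives the optimal rotation.
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    if __name__ == "__main__":
        # Synthetic check: rotate a point cloud by 30° and shift it,
        # then recover the motion from the correspondences.
        theta = np.radians(30)
        R_true = np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
        t_true = np.array([1.5, -0.5])
        pts = np.random.default_rng(0).uniform(-1, 1, size=(20, 2))
        moved = pts @ R_true.T + t_true
        R, t = estimate_rigid_motion(pts, moved)
        print(np.allclose(R, R_true), np.allclose(t, t_true))
    ```

    In a real odometry loop, `src`/`dst` would come from the matching stage (the thesis's main dimensional priority search), typically after outlier rejection, and the per-frame (R, t) estimates would be chained to produce the odometry output.
    
    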
    Appears in Collections:[Graduate Institute & Department of Electrical Engineering] Thesis

    Files in This Item:

    File: index.html — Size: 0 Kb — Format: HTML

    All items in the institutional repository are protected by copyright, with all rights reserved.


    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library & TKU Library IR teams.