RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library & TKU Library IR team.
    Please use this permanent URL to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/108932


    Title: Development of 3D Feature Detection and on Board Mapping Algorithm from Video Camera for Navigation
    Authors: Hao, Eng Zi;Srigrarom, Sutthiphong
    Keywords: 3D Point Cloud;Features Detection;Vision-based Navigation;MATLAB
    Date: 2016-03
    Uploaded: 2016-12-20 09:27:44 (UTC+8)
    Publisher: Tamkang University Press
    Abstract: In this paper, a different approach is introduced that produces 3D reconstruction
    results comparable to those of the working geometry method, while being neither as computationally
    expensive nor as mathematically complex. An image pair capturing the left and right views of an
    object or its surroundings is used as input, an analogy close to how the human eyes perceive the
    world. The 3D reconstruction program is divided into two sections, implemented in three MATLAB
    scripts in total: the first section generates the image frames, and the second generates the 3D
    point cloud. In the first part of the program, two MATLAB scripts produce estimated intermediate
    image frames between the two views that were not captured by the camera. In the second half, the
    image pair is processed to generate a 3D point cloud containing the 3D co-ordinates of the detected
    features. This technique allows partial reconstruction of a 3D environment by stitching these image
    frames together, creating a video of the scene as if the camera were moving from the left viewpoint
    to the right, giving the user the depth perception one would experience when viewing the scene in
    real life. To generate the 3D point cloud, the camera must first be calibrated with the aid of a
    checkerboard to obtain the camera parameters. The camera positions are then estimated and combined
    with the 3D co-ordinates of the features to produce the point cloud. The result is an interactive
    3D plot within MATLAB of the feature co-ordinates, extracted from just a pair of input images.
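    The core geometric step the abstract describes — calibrating the cameras, estimating their positions, and converting matched features from the two views into 3D co-ordinates — can be sketched as a linear (DLT) two-view triangulation. The paper's own implementation is in MATLAB and is not reproduced here; the Python sketch below, including the synthetic camera set-up and all variable names, is an illustrative assumption rather than the authors' code.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one feature matched in two views.

        P1, P2 : 3x4 camera projection matrices (intrinsics K times [R | t]).
        x1, x2 : pixel co-ordinates (u, v) of the feature in each view.
        Returns the estimated 3D point in world co-ordinates.
        """
        # Each view contributes two linear constraints on the homogeneous point X.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The solution is the right singular vector of A with smallest singular value.
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]          # de-homogenize

    def project(P, X):
        """Project a 3D point into pixel co-ordinates with camera matrix P."""
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Synthetic left/right stereo pair: identical intrinsics (as obtained from a
    # checkerboard calibration), right camera offset 0.1 units along the baseline.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Project a known 3D point into both views, then recover it by triangulation.
    X_true = np.array([0.2, -0.1, 2.0])
    x_l, x_r = project(P_left, X_true), project(P_right, X_true)
    X_hat = triangulate(P_left, P_right, x_l, x_r)
    ```

    With noise-free correspondences, as in this synthetic set-up, the recovered point matches the original; on real feature matches the same procedure yields the 3D co-ordinates that populate the point cloud.
    
    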
    Relation: Journal of Applied Science and Engineering 19(1), pp. 23-39
    DOI: 10.6180/jase.2016.19.1.04
    Appears in Collections: [Tamkang Journal of Science and Engineering] Vol. 19, No. 1


    All items in this institutional repository are protected by copyright, with all rights reserved.

