Tamkang University Institutional Repository: Item 987654321/108932
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/108932


    Title: Development of 3D Feature Detection and on Board Mapping Algorithm from Video Camera for Navigation
    Authors: Hao, Eng Zi;Srigrarom, Sutthiphong
    Keywords: 3D Point Cloud;Features Detection;Vision-based Navigation;MATLAB
    Date: 2016-03
    Issue Date: 2016-12-20 09:27:44 (UTC+8)
    Publisher: Tamkang University Press (淡江大學出版中心)
    Abstract: In this paper, a different approach is introduced that produces 3D reconstruction
    outcomes comparable to those of the working geometry method, while being neither as computationally
    intensive nor as mathematically complex. An image pair capturing the left and right views of an
    object or its surroundings is used as input; the analogy is similar to how the human eyes perceive
    the world. The 3D reconstruction program is broken into two sections, with three MATLAB scripts
    written in total: the first section generates the image frames, and the second generates the 3D
    point cloud. In the first part of the program, two MATLAB scripts estimate image frames between the
    two views that were not captured by the camera. This technique allows the partial reconstruction of
    a 3D environment by stitching these frames together, creating a video of the environment as if the
    camera were moving from the left viewpoint to the right, giving the user the depth perception one
    would experience when viewing the scene in real life. In the second half of the program, the image
    pair is processed to generate a 3D point cloud containing the 3D co-ordinates of the detected
    features. To achieve this, the camera must first be calibrated with the aid of a checkerboard to
    obtain the camera parameters. The camera positions are also estimated and combined with the 3D
    co-ordinates of the features to produce the 3D point cloud. The result is an interactive 3D plot
    within MATLAB giving the 3D co-ordinates of the features, extracted from just a pair of input
    images.
    Relation: Journal of Applied Science and Engineering 19(1), pp. 23-39
    DOI: 10.6180/jase.2016.19.1.04
    Appears in Collections:[Journal of Applied Science and Engineering] v.19 n.1
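The core of the second stage described in the abstract is turning matched features from the calibrated stereo pair into 3D co-ordinates. The paper's implementation is in MATLAB and is not reproduced here; the following is a minimal illustrative sketch of the standard linear (DLT) triangulation step in Python with NumPy, where the function name and the example camera parameters are assumptions, not the authors' code:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature from two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics times [R|t]),
             obtained from calibration and pose estimation.
    x1, x2 : (u, v) pixel co-ordinates of the feature in each view.
    Returns the estimated 3D point in world co-ordinates.
    """
    # Each image measurement contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, and likewise for v.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value (least-squares solution of A @ X = 0, ||X|| = 1).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Example with assumed intrinsics and a 1-unit horizontal stereo baseline.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # right camera
point = triangulate_point(P1, P2, (370.0, 260.0), (270.0, 260.0))
```

Repeating this over all matched features, together with the estimated camera positions, yields the 3D point cloud the abstract describes.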
