Tamkang University Institutional Repository: Item 987654321/35225
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/35225


    Title: 高維空間中使用多棵K-D Trees搜尋最近鄰居
    Other Titles: Nearest neighbor searching in high dimensions using multiple K-D Trees
    Authors: 石兆宇;Shih, Chao-yu
    Contributors: Master's Program, Department of Computer Science and Information Engineering, Tamkang University
    顏淑惠;Yen, Shew-huey
    Keywords: Feature matching; Nearest neighbor search (NNS); K-D tree; Backtrack
    Date: 2009
    Issue Date: 2010-01-11 06:13:58 (UTC+8)
    Abstract: Feature matching plays an important role in many image processing applications, such as panorama generation, image stitching, and object tracking. Feature matching means finding the most similar pairs of feature points between two different images, which is equivalent to finding each feature point's nearest neighbor.

    This thesis proposes a nearest neighbor search method that coarsely partitions the data by projecting it onto different planes, builds several complementary K-D trees, and searches them with the Best Bin First (BBF) backtracking mechanism, spreading the search evenly across the trees to reduce search time and raise the accuracy rate. The time complexity is analyzed algorithmically, and experiments then confirm the analysis and the runtime performance.
    Feature matching plays an important role in many image processing applications. To match a feature point in the query image is to find the corresponding feature point (i.e., the nearest neighbor) extracted from the target image. Edges, corners, and DoG (difference of Gaussian) points are the most commonly adopted features in recent years. How features are represented is also very important, and SIFT (Scale Invariant Feature Transform) is one of the state-of-the-art descriptors. SIFT, proposed by Lowe [1], usually yields feature vectors of high dimension, such as 128. Thus, efficiently finding the nearest neighbor of a query feature point in the target image becomes essential. A kd-tree is often used to find the nearest neighbor in dimensions higher than two; however, it needs many backtracks to find the nearest neighbor as the dimension grows. In this paper, we propose a multiple-kd-tree method to efficiently search for the nearest neighbors of high-dimensional feature points. First, we project the feature points onto the three hyperplanes with the greatest variances. Second, for the points projected onto each splitting hyperplane, two kd-trees are built: one is the conventional kd-tree, and the other takes its first split on the hyperplane with the second-largest variance, so six kd-trees are built in total. In this way, the kd-trees are searched for the nearest neighbor through different hyperplanes, compensating for the information lost by projection. The experiments show that, under the same number of backtracks, our method indeed improves the accuracy rate of nearest neighbor searching.
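
    To make the search strategy in the abstract concrete, the following is a minimal, self-contained Python sketch of the general idea rather than the thesis implementation: it builds a few complementary kd-trees whose root splits follow the axes of greatest variance (a simplification of the six projected trees described above) and answers queries with a Best-Bin-First search that caps the number of backtracks. All names (build_kdtree, bbf_search, multi_tree_nn) and parameters (max_bins, n_trees) are illustrative assumptions.

# Illustrative sketch only -- not the authors' code. Assumes NumPy; the "projection"
# step is simplified to choosing each tree's root split axis by variance.
import heapq

import numpy as np


class Node:
    __slots__ = ("point", "index", "axis", "left", "right")

    def __init__(self, point, index, axis, left, right):
        self.point, self.index, self.axis = point, index, axis
        self.left, self.right = left, right


def build_kdtree(points, indices, depth=0, root_axis=None):
    # Conventional median-split kd-tree; root_axis optionally forces the first
    # split so that complementary trees partition the data differently.
    if len(indices) == 0:
        return None
    dims = points.shape[1]
    axis = root_axis if (depth == 0 and root_axis is not None) else depth % dims
    order = indices[np.argsort(points[indices, axis])]
    mid = len(order) // 2
    return Node(points[order[mid]], int(order[mid]), axis,
                build_kdtree(points, order[:mid], depth + 1),
                build_kdtree(points, order[mid + 1:], depth + 1))


def bbf_search(root, query, max_bins=50):
    # Best-Bin-First: pending branches are revisited in order of their distance
    # to the splitting plane; the search stops after max_bins branch descents.
    best, best_d2 = -1, float("inf")
    heap = [(0.0, 0, root)]
    tiebreak, bins = 1, 0
    while heap and bins < max_bins:
        _, _, node = heapq.heappop(heap)
        while node is not None:
            d2 = float(np.sum((query - node.point) ** 2))
            if d2 < best_d2:
                best, best_d2 = node.index, d2
            diff = float(query[node.axis] - node.point[node.axis])
            near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
            if far is not None:
                heapq.heappush(heap, (diff * diff, tiebreak, far))
                tiebreak += 1
            node = near
        bins += 1
    return best, best_d2 ** 0.5


def multi_tree_nn(points, query, n_trees=2, max_bins=50):
    # Build n_trees kd-trees whose root splits follow the axes of greatest
    # variance, give each an equal share of the backtracking budget, and keep
    # the closest point found by any tree.
    top_axes = np.argsort(points.var(axis=0))[::-1][:n_trees]
    all_idx = np.arange(len(points))
    trees = [build_kdtree(points, all_idx, root_axis=int(a)) for a in top_axes]
    best, best_d = -1, float("inf")
    for tree in trees:
        idx, d = bbf_search(tree, query, max(1, max_bins // n_trees))
        if d < best_d:
            best, best_d = idx, d
    return best, best_d


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((2000, 128)).astype(np.float32)  # SIFT-like 128-D vectors
    idx, dist = multi_tree_nn(data, data[17])
    print(idx, dist)  # expected: 17, 0.0 (the query is itself a database point)

    Splitting the backtracking budget evenly across the trees mirrors the abstract's point that BBF is applied uniformly over the complementary trees, so the total search cost stays comparable to a single tree given the same overall number of backtracks.
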
    Appears in Collections:[Graduate Institute & Department of Computer Science and Information Engineering] Thesis

    Files in This Item:

    File    Size    Format
            0Kb     Unknown

    All items in the Institutional Repository are protected by copyright, with all rights reserved.

