    Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/59873


    Title: Applying Multiple KD-Trees in High Dimensional Nearest Neighbor Searching
    Authors: Yen, Shwu Huey;Shih, Chao Yu;Li, Tai Kuang;Chang, Hsiao Wei
    Contributors: Department of Computer Science and Information Engineering, Tamkang University
    Keywords: feature matching;nearest neighbor searching (NNS);kd-tree;backtracking;best-bin-first;projection
    Date: 2010-09
    Issue Date: 2013-06-07 10:47:39 (UTC+8)
    Publisher: Sofia: North Atlantic University Union
    Abstract: Feature matching plays a key role in many image
    processing applications. To be robust and distinctive, feature vectors
    usually have high dimensions, as in SIFT (Scale Invariant Feature
    Transform) with dimension 64 or 128. Thus, accurately finding the
    nearest neighbor of a high-dimensional query feature point in the target
    image becomes essential. The kd-tree is commonly adopted for
    organizing and indexing high-dimensional data. However, nearest
    neighbor searching on a kd-tree requires many backtrackings and tends
    to make errors as the dimension grows. In this paper, we propose a
    multiple kd-trees method to efficiently locate the nearest neighbor of
    high-dimensional feature points. By constructing multiple kd-trees, the
    nearest neighbor is searched through different hyperplanes, which
    effectively compensates for the deficiency of the conventional kd-tree.
    Compared to the well-known best-bin-first algorithm on a kd-tree,
    experiments show that our method improves the precision of nearest
    neighbor searching. When the dimension of the data is 64 or 128 (on
    2000 simulated data points), the average improvement in precision
    reaches 28% (compared under the same dimension) and 53% (compared
    under the same number of backtrackings). Finally, we revise the stop
    criterion in backtracking; according to preliminary experiments, this
    revision further improves the precision of the proposed method.
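    The abstract does not give the paper's exact tree construction, so the sketch below only illustrates the general idea: build several kd-trees whose splitting hyperplanes differ (here, by shuffling the order in which dimensions are split — an assumption, not necessarily the authors' scheme), run a best-bin-first search with a fixed backtracking budget on each, and keep the best candidate found. All function names are illustrative.

    ```python
    import heapq
    import random

    def sqdist(a, b):
        """Squared Euclidean distance between two equal-length vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def build_kdtree(points, idxs, perm, depth=0):
        """Build a kd-tree over points[i] for i in idxs, splitting at the median.
        The splitting dimension cycles through `perm`, so different permutations
        yield trees with different hyperplanes."""
        if not idxs:
            return None
        d = perm[depth % len(perm)]
        idxs = sorted(idxs, key=lambda i: points[i][d])
        mid = len(idxs) // 2
        return {"idx": idxs[mid], "dim": d,
                "left": build_kdtree(points, idxs[:mid], perm, depth + 1),
                "right": build_kdtree(points, idxs[mid + 1:], perm, depth + 1)}

    def bbf_search(root, points, q, max_backtracks):
        """Best-bin-first search: pruned branches are revisited in order of
        their distance to q, stopping after max_backtracks revisits."""
        best_idx, best_d2 = None, float("inf")
        tie = 0                      # tiebreaker so the heap never compares nodes
        heap = [(0.0, tie, root)]
        backtracks = -1              # the initial descent is not a backtrack
        while heap and backtracks < max_backtracks:
            bin_d2, _, node = heapq.heappop(heap)
            backtracks += 1
            if bin_d2 >= best_d2:    # this bin cannot beat the current best
                continue
            while node is not None:  # descend toward the leaf nearest to q
                p = points[node["idx"]]
                d2 = sqdist(p, q)
                if d2 < best_d2:
                    best_idx, best_d2 = node["idx"], d2
                diff = q[node["dim"]] - p[node["dim"]]
                near, far = ((node["left"], node["right"]) if diff < 0
                             else (node["right"], node["left"]))
                if far is not None:  # remember the far branch for backtracking
                    tie += 1
                    heapq.heappush(heap, (diff * diff, tie, far))
                node = near
        return best_idx, best_d2

    def multi_kdtree_nn(points, q, n_trees=4, max_backtracks=8, seed=0):
        """Query several kd-trees, each built on a shuffled dimension order,
        and keep the best candidate found across all of them."""
        rng = random.Random(seed)
        dim = len(points[0])
        best_idx, best_d2 = None, float("inf")
        for _ in range(n_trees):
            perm = list(range(dim))
            rng.shuffle(perm)
            root = build_kdtree(points, list(range(len(points))), perm)
            idx, d2 = bbf_search(root, points, q, max_backtracks)
            if d2 < best_d2:
                best_idx, best_d2 = idx, d2
        return best_idx
    ```

    With an unlimited backtracking budget a single tree already returns the exact nearest neighbor; the point of the multiple-tree variant is that, under a small fixed budget, trees with different hyperplanes rarely all miss the true neighbor at once.
    
    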
    Relation: International Journal of Circuits, Systems and Signal Processing 4(4), pp.153-160
    Appears in Collections: [Department and Graduate Institute of Computer Science and Information Engineering] Journal Articles

    Files in This Item:

    File: 1998-4464_4(4)p153-160.pdf (1854 KB, Adobe PDF)

    All items in this institutional repository are protected by copyright, with all rights reserved.

