Tamkang University Institutional Repository: Item 987654321/76915
    Please use this identifier to cite or link to this item: https://tkuir.lib.tku.edu.tw/dspace/handle/987654321/76915


    Title: 結合特徵排序的改良式浮動序列特徵擷取演算法
    Other Titles: Modified Sequential Floating Search Algorithm with a Novel Ranking Method
    Authors: 周建興;趙于翔
    Contributors: Department of Electrical Engineering, Tamkang University
    Keywords: 特徵選取;特徵排序;假特徵;機器學習;k最近鄰居法;feature selection;feature ranking;false feature;machine learning;k-NN
    Date: 2011-08
    Issue Date: 2012-05-22 21:53:26 (UTC+8)
    Abstract: In pattern classification, the choice of a suitable feature selection method is often the key to success. A successful feature selection not only raises classification accuracy but also extracts the critical features that users care about. For instance, in DNA sequence analysis, feature selection can locate the segments of a sequence, or the types of amino acids, that may lead to certain diseases; in microarray data, it can pick out the genes associated with a particular disease; and in text categorization, it can identify the keywords that truly contribute to classification. Beyond selecting the features users are most interested in, feature selection also reduces the time and computation required to train and test a classifier, as well as the storage space needed for the data. Among the many feature selection methods, sequential floating search (SFS) is well known and widely used. In this project, we propose a feature selection method that combines feature ranking with SFS. In the feature ranking stage, we use the notion of a false feature to rank features by importance, and then apply the SFS algorithm to the lower-ranked, less important features. This approach both overcomes problems the original SFS algorithm may encounter and extracts a more critical feature subset.
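    The following is a minimal Python sketch of the two-stage idea the abstract describes, not the authors' implementation: real features are ranked against randomly generated false (probe) features using single-feature k-NN cross-validation accuracy, and a sequential floating search with a k-NN scorer is then run over the lower-ranked features while the clearly important ones are kept fixed. The scoring function, the probe-based threshold, the function names (knn_cv_score, rank_with_false_features, sequential_floating_search), and the choice to keep top-ranked features fixed are all illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def knn_cv_score(X, y, idx, k=5, folds=5):
    """Cross-validated k-NN accuracy using only the feature columns in idx."""
    if len(idx) == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, X[:, list(idx)], y, cv=folds).mean()


def rank_with_false_features(X, y, n_false=10, seed=0):
    """Score each real feature on its own, do the same for randomly generated
    'false' features, and split the real features into those that beat the
    best false feature (strong) and the rest (weak, lower-ranked)."""
    rng = np.random.default_rng(seed)
    n_real = X.shape[1]
    real_scores = np.array([knn_cv_score(X, y, [j]) for j in range(n_real)])
    Xf = np.hstack([X, rng.standard_normal((X.shape[0], n_false))])
    false_scores = np.array([knn_cv_score(Xf, y, [n_real + j]) for j in range(n_false)])
    threshold = false_scores.max()
    order = np.argsort(-real_scores)                 # rank by importance
    strong = [j for j in order if real_scores[j] > threshold]
    weak = [j for j in order if real_scores[j] <= threshold]
    return strong, weak


def sequential_floating_search(X, y, candidates, base=(), target=10):
    """Plain sequential floating (forward) search over `candidates`;
    the `base` features are always kept and never removed."""
    selected = list(base)
    pool = [c for c in candidates if c not in selected]
    best_by_size = {len(selected): knn_cv_score(X, y, selected)}
    while pool and len(selected) < len(base) + target:
        # Forward step: add the candidate that yields the best CV score.
        score, j = max((knn_cv_score(X, y, selected + [c]), c) for c in pool)
        selected.append(j)
        pool.remove(j)
        best_by_size[len(selected)] = max(best_by_size.get(len(selected), 0.0), score)
        # Floating step: drop a non-base feature (other than the one just
        # added) whenever that beats the best subset seen at that size.
        improved = True
        while improved and len(selected) > len(base) + 1:
            improved = False
            for r in selected[len(base):]:
                if r == j:
                    continue
                trial = [f for f in selected if f != r]
                s = knn_cv_score(X, y, trial)
                if s > best_by_size.get(len(trial), 0.0):
                    selected, improved = trial, True
                    best_by_size[len(trial)] = s
                    pool.append(r)
                    break
    return selected


if __name__ == "__main__":
    # Synthetic data: 30 features, only columns 0, 3, 7 carry the class signal.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 30))
    y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)
    strong, weak = rank_with_false_features(X, y)
    subset = sequential_floating_search(X, y, candidates=weak, base=tuple(strong), target=5)
    print("selected feature indices:", sorted(int(f) for f in subset))
```

    Fixing the top-ranked features and letting the floating search work only on the remaining candidates is one plausible reading of "applying SFS to the lower-ranked features"; it shrinks the search space the floating algorithm has to explore, which is in the spirit of the improvement the abstract claims, but the paper itself may combine the two stages differently.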
    Appears in Collections: [Graduate Institute & Department of Electrical Engineering] Research Paper

    Files in This Item:

    There are no files associated with this item.

    All items in the Tamkang University Institutional Repository are protected by copyright, with all rights reserved.

