Please use this identifier to cite or link to this item: http://tkuir.lib.tku.edu.tw:8080/dspace/handle/987654321/45289

    Title: Classification of Age Groups Based on Facial Features
    Authors: 洪文斌 (Horng, Wen-bing); 李正平 (Lee, Cheng-ping); 陳俊文 (Chen, Chun-wen)
    Contributors: Department of Computer Science and Information Engineering, Tamkang University
    Keywords: Age Classification;Facial Feature Extraction;Neural Network
    Date: 2001-09-01
    Issue Date: 2010-03-26 18:55:55 (UTC+8)
    Publisher: Tamkang University
    Abstract: An age group classification system for gray-scale facial images is proposed in this paper. Four age groups are used in the classification system: babies, young adults, middle-aged adults, and old adults. The system operates in three phases: location, feature extraction, and age classification. Based on the symmetry of human faces and the variation of gray levels, the positions of the eyes, nose, and mouth are located by applying the Sobel edge operator and region labeling. Two geometric features and three wrinkle features are then extracted from the facial image. Finally, two back-propagation neural networks are constructed for classification. The first employs the geometric features to decide whether the facial image is of a baby. If it is not, the second network uses the wrinkle features to classify the image into one of the three adult groups. The proposed system was evaluated on 230 facial images using a Pentium II 350 processor with 128 MB RAM.
    Half of the images were used for training and the other half for testing. Classifying an image takes 0.235 seconds on average. The identification rate reaches 90.52% for the training images and 81.58% for the test images, which is close to human subjective judgment.
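The pipeline described in the abstract can be sketched as follows: a Sobel gradient-magnitude pass (the operator the paper uses to locate facial features) followed by the two-stage classification cascade. This is a minimal illustrative sketch, not the paper's implementation; the feature vectors, the 0.5 decision threshold, and the `baby_net`/`adult_net` callables are hypothetical placeholders standing in for the two trained back-propagation networks.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator.

    The paper applies Sobel edges plus region labeling to locate the
    eyes, nose, and mouth; this computes only the edge-magnitude map.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            # combine horizontal and vertical responses
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

def classify_age_group(geometric, wrinkle, baby_net, adult_net):
    """Two-stage cascade: network 1 decides baby vs. not-baby from
    geometric features; network 2 picks one of three adult groups
    from wrinkle features. Both nets are placeholder callables here.
    """
    if baby_net(geometric) > 0.5:      # hypothetical decision threshold
        return "baby"
    scores = adult_net(wrinkle)        # one score per adult group
    groups = ("young adult", "middle-aged adult", "old adult")
    return groups[int(np.argmax(scores))]

# Example on a synthetic vertical step edge (zeros left, ones right):
step = np.zeros((5, 5))
step[:, 3:] = 1.0
print(sobel_magnitude(step))  # strong response along the edge columns
print(classify_age_group([0.2], [0.1, 0.7, 0.2],
                         lambda g: 0.1, lambda w: w))
```

With the placeholder networks above, a below-threshold baby score routes the image to the second stage, which returns the adult group with the highest score.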
    Relation: 淡江理工學刊 = Tamkang Journal of Science and Engineering 4(3), pp. 183-192
    DOI: 10.6180/jase.2001.4.3.05
    Appears in Collections:[Graduate Institute & Department of Computer Science and Information Engineering] Journal Article

    Files in This Item:

    File: 1560-6686_4-3-5.pdf | Size: 475 KB | Format: Adobe PDF

    All items in this institutional repository are protected by copyright, with all rights reserved.
