This paper presents an example-based learning approach for locating the facial feature points of human faces. The proposed approach models the correlation between the facial feature points (shape) and the input texture information in a single, combined eigenspace, thereby preserving the significant components of their dependency. Experiments show that the proposed approach accurately reconstructs the facial shape without needing to restore texture information lost through occlusion or unfavorable lighting conditions.
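As an illustrative sketch only (not the paper's actual implementation, whose details are not given here), the idea of a single combined shape-texture eigenspace can be demonstrated by applying PCA to concatenated shape and texture vectors, then estimating shape from texture alone by a least-squares fit against the texture rows of the eigenvectors. All dimensions and data below are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, shape_dim, tex_dim = 200, 10, 40  # assumed toy dimensions

# Synthetic training set: texture is a (noisy) linear function of shape,
# standing in for the shape-texture correlation the paper models.
S = rng.normal(size=(n_samples, shape_dim))          # feature-point coordinates
A = rng.normal(size=(shape_dim, tex_dim))            # hidden shape->texture map
T = S @ A + 0.1 * rng.normal(size=(n_samples, tex_dim))  # texture samples

# Combined vectors: concatenate each shape with its texture.
X = np.hstack([S, T])
mean = X.mean(axis=0)
Xc = X - mean

# PCA of the combined vectors via SVD; keep k eigenvectors.
k = 10
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k]                     # combined eigenspace basis, shape (k, shape_dim + tex_dim)

# Given only the texture of a new face, solve for the eigenspace
# coefficients using the texture part of each eigenvector, then
# read the shape estimate off the shape part.
s_true = rng.normal(size=shape_dim)
t_obs = s_true @ A                                   # observed texture only
P_tex = P[:, shape_dim:]                             # texture rows of the basis
c, *_ = np.linalg.lstsq(P_tex.T, t_obs - mean[shape_dim:], rcond=None)
s_est = mean[:shape_dim] + c @ P[:, :shape_dim]      # reconstructed shape

rel_err = np.linalg.norm(s_est - s_true) / np.linalg.norm(s_true)
print(f"relative shape reconstruction error: {rel_err:.3f}")
```

Because shape and texture are modeled jointly, the texture coefficients alone pin down a point in the combined eigenspace, which is what allows shape recovery without first restoring the missing texture.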