This thesis applies a facial feature detection and tracking system to improve an assistive communication device for visually impaired people with multiple disabilities, who in daily life often cannot communicate with family members or the outside world through speech or writing. In this study, images of head movement are captured by a notebook computer's webcam, and the facial feature of interest (the nose) is detected and tracked with the Haar algorithm. The displacement of the nose center is then used to compute the head rotation angle and identify the swing direction, which drives the communication operations of Pinyin spelling, Chinese character composition, and error correction. Finally, an integrated text-to-speech editing system produces voice or text output, completing the improved assistive communication device based on facial feature detection and tracking. With this system, visually impaired people with multiple disabilities can operate speech-synthesis communication directly through head movement. Besides making assistive communication more convenient in daily life, the system helps ease their communication difficulties with family and the outside world, and fosters greater self-learning ability and independent living.
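The direction-recognition step described above (mapping the nose-center displacement between frames to a head-swing direction) can be sketched as follows. This is a minimal illustration under assumed conventions, not the thesis implementation: the function name, the dead-zone threshold, and the four-direction mapping are all hypothetical choices.

```python
import math

# Dead zone in pixels: nose-center displacements smaller than this are
# treated as jitter and ignored. The value 10 is an illustrative assumption.
DEAD_ZONE = 10

def classify_swing(dx, dy):
    """Map a nose-center displacement (dx, dy) in image coordinates to
    'left', 'right', 'up', 'down', or None (inside the dead zone).
    Image y grows downward, so a negative dy means the head moved up."""
    if math.hypot(dx, dy) < DEAD_ZONE:
        return None  # too small to be a deliberate head swing
    if abs(dx) >= abs(dy):
        # horizontal component dominates
        return "right" if dx > 0 else "left"
    # vertical component dominates
    return "down" if dy > 0 else "up"
```

In a full pipeline, the recognized direction would then be fed to the Pinyin spelling, character composition, and error-correction logic, with the rotation angle (e.g. via `math.atan2(dy, dx)`) available when finer-grained input than four directions is needed.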