In this thesis, a non-verbal interaction system is developed that enables a robot to interact with humans. The research comprises four parts. First, human faces are detected at varying distances from the robot using skin-color matching combined with an ellipse template matching method. Second, six facial feature points are extracted from the detected face region: the centers of the two pupils and the four endpoints of the two eyebrows. A fuzzy inference controller, driven by the geometric relations among these six points, then determines whether the operator is gazing at the robot. Third, when the operator turns his or her gaze to the robot, the system recognizes the hand gesture performed by the operator as a command. Finally, combining the recognized gaze state and gesture command with the robot's two-wheeled motion control, the robot executes the operator's command, completing the non-verbal interaction between human and robot.
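The first stage, skin-color matching, can be illustrated with a minimal sketch. The RGB rule below is a common heuristic for skin segmentation and the thresholds are illustrative assumptions, not the thesis's actual parameters; the `bounding_box` helper is a hypothetical stand-in for the region that the ellipse template matching stage would refine.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels in an H x W x 3 RGB image.

    Illustrative rule: red dominates green and blue with a minimum
    chromatic spread (assumed thresholds, not the thesis's values).
    """
    rgb = rgb.astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (r > g) & (r > b))

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of all mask pixels;
    a placeholder for the candidate region passed to ellipse matching."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

In practice the candidate region would then be scored against an ellipse template at several scales, which is what lets the detector work at different distances from the robot.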
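The gaze-determination step can be sketched as a small fuzzy inference. The inputs here (a normalized horizontal pupil offset and a head-tilt measure, both assumed to be derived from the six feature points) and the triangular membership parameters are illustrative assumptions, not the controller actually designed in the thesis.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def gaze_score(pupil_offset, tilt):
    """Fuzzy degree in [0, 1] that the operator gazes at the robot.

    Assumed inputs: pupil_offset = normalized horizontal pupil
    displacement, tilt = normalized eyebrow-line tilt.
    """
    centered = tri(abs(pupil_offset), -0.2, 0.0, 0.3)  # "eyes centered"
    level = tri(abs(tilt), -0.2, 0.0, 0.4)             # "head level"
    # Single rule: gazing IF eyes centered AND head level (min = fuzzy AND).
    return min(centered, level)

def is_gazing(pupil_offset, tilt, threshold=0.5):
    """Crisp decision obtained by thresholding the fuzzy score."""
    return gaze_score(pupil_offset, tilt) >= threshold
```

A full Mamdani-style controller would aggregate several such rules and defuzzify, but the min-based AND above captures the core idea of mapping geometric feature relations to a gaze decision.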
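Finally, executing a command through two-wheeled motion control rests on differential-drive kinematics. The sketch below is a generic Euler-step model of a differential-drive base, assumed here for illustration; the wheel-base value and the command-to-wheel-speed mapping are not taken from the thesis.

```python
import math

def step_pose(x, y, theta, v_left, v_right, wheel_base, dt):
    """Advance a differential-drive pose (x, y, theta) by one Euler step.

    v_left, v_right: wheel linear speeds; wheel_base: axle length.
    Linear speed is the wheel average; angular speed is the wheel
    difference divided by the wheel base.
    """
    v = (v_right + v_left) / 2.0
    w = (v_right - v_left) / wheel_base
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Under this model, equal wheel speeds drive the robot straight ahead, while opposite speeds rotate it in place, which is how a recognized gesture command such as "come here" or "turn" would be realized as wheel-speed setpoints.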