This thesis presents a simultaneous localization and mapping (SLAM) algorithm for mobile robots using a Microsoft Kinect RGB-D sensor. The research consists of four stages. First, the Kinect RGB-D sensor is calibrated, including the intrinsic parameters of the RGB camera and the alignment between the RGB lens and the depth lens. Second, the RGB-D SLAM algorithm is developed and tested in indoor environments. Third, structure from motion (SFM) is integrated into the RGB-D SLAM task to construct a model of the environment. Fourth, the computational speed of the system is improved. Applying the concept of cloud computing, the system is divided into two procedures: image processing and state estimation. The image-processing procedure is retained on the mobile sensory system, while the state-estimation procedure is offloaded to a cloud computing server. Experimental results show that the proposed cloud computing scheme increases the computational speed by 15%.