In this paper, a different approach is introduced that produces 3D reconstruction results comparable to those of the working geometry method, while being less computationally intensive and mathematically complex. An image pair, capturing the left and right views of an object or scene,
is used as input. The analogy is close to how human eyes perceive the world. The 3D
reconstruction program is broken into two sections, with three MATLAB scripts written in total:
the first section generates the image frames, and the second generates the 3D point cloud. In
the first part of the program, two MATLAB scripts estimate and generate the intermediate
image frames between the two views that were not captured by the camera. In the
second half of the program, the image pair is processed to generate a 3D point cloud containing the 3D
coordinates of the features. This technique allows partial reconstruction of a 3D environment by
stitching together these image frames, creating a video of the environment as if the camera were
moving from the left camera position to the right, giving the user the depth perception one would have
when viewing the scene in real life. A 3D point cloud is then generated; to achieve this, the
camera must first be calibrated against a checkerboard to obtain the camera parameters. The
camera positions are also estimated and combined with the 3D coordinates of the features,
producing the 3D point cloud. The result is the set of 3D coordinates of the features, extracted from
just a pair of input images and shown in an interactive 3D plot within MATLAB.
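To make the underlying geometry concrete, the core depth-from-disparity relation that stereo reconstruction of this kind rests on can be sketched in a few lines. This is an illustrative sketch, not the paper's MATLAB code: it assumes a rectified pin-hole camera pair, and the focal length, baseline, and principal point below are hypothetical values standing in for the parameters that the checkerboard calibration step would provide.

```python
# Depth from disparity for a rectified stereo pair (pin-hole camera model).
# A feature matched at column xl in the left image and xr in the right image
# (same row y, since the pair is rectified) has depth Z = f * B / d,
# where d = xl - xr is the disparity, f the focal length in pixels,
# and B the baseline between the two camera positions.

def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Return the 3D point (X, Y, Z) for one matched feature.

    (cx, cy) is the principal point; back-projection uses the left camera.
    All parameters are hypothetical here; in the paper they come from
    checkerboard calibration and camera-pose estimation in MATLAB.
    """
    d = xl - xr                # disparity in pixels (positive for finite depth)
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / d       # depth along the optical axis
    X = (xl - cx) * Z / f      # back-project to metric coordinates
    Y = (y - cy) * Z / f
    return (X, Y, Z)

# Hypothetical calibration: f = 800 px, baseline = 0.1 m, principal point (320, 240).
point = triangulate(xl=420, xr=380, y=300, f=800, baseline=0.1, cx=320, cy=240)
print(point)  # disparity 40 px gives depth Z = 800 * 0.1 / 40 = 2.0 m
```

Applying this relation to every matched feature between the left and right images, and expressing the results in a common frame using the estimated camera positions, yields the kind of 3D point cloud the paper produces.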
Related: Journal of Applied Science and Engineering 19(1), pp. 23-39