Graduation Year

2013

Document Type

Thesis

Degree

M.S.C.S.

Degree Granting Department

Computer Science and Engineering

Major Professor

Sudeep Sarkar

Abstract

This thesis describes a novel procedure for achieving full 3D reconstruction from multiple RGB-D cameras configured such that the amount of overlap between views is low. Overlap describes the portion of a scene that is common to a pair of views and is considered low when at most 50% of the scene is shared. Compatible systems are configured such that, interpreting cameras as nodes and overlap as edges, a connected undirected graph can be constructed. The fundamental goal of the proposed procedure is to calibrate a given system of cameras. Calibration is the process of finding the transformation from each camera's point of view to the reconstructed scene's global coordinate system. The procedure also focuses on maintaining the accuracy of reconstruction once the system is calibrated.
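To illustrate the compatibility condition, the following minimal Python sketch treats cameras as nodes and pairwise overlap as edges, then checks that the resulting undirected graph is connected. The function names and the overlap threshold are illustrative assumptions, not taken from the thesis implementation.

```python
from collections import deque

def build_overlap_graph(overlap, threshold=0.05):
    """overlap: dict mapping (cam_i, cam_j) -> fraction of the scene shared.
    Returns an adjacency structure keeping only pairs with usable overlap.
    The threshold is a hypothetical cutoff: low overlap is fine, none is not."""
    graph = {}
    for (i, j), frac in overlap.items():
        if frac >= threshold:
            graph.setdefault(i, set()).add(j)
            graph.setdefault(j, set()).add(i)
    return graph

def is_connected(graph, cameras):
    """Breadth-first search from an arbitrary camera; the system is
    calibratable under this model only if every camera is reachable."""
    if not cameras:
        return True
    seen, queue = {cameras[0]}, deque([cameras[0]])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == set(cameras)
```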

RGB-D cameras gained popularity for their ability to generate dense 3D images; however, an individual camera cannot provide a full 3D image because of factors such as occlusions and a limited field of view. To successfully combine views, there must exist common features that can be matched, or prior heuristics about the environment that can be used to infer alignment. Intuitively, corresponding features exist in the overlapping regions of views. Combining data from pairs of overlapping views therefore yields a more complete 3D reconstruction of the scene. A calibrated system of cameras is susceptible to misalignment. Re-calibration of the entire system is expensive and is unnecessary if only a small number of cameras have become misaligned. Correcting misalignment is a much more practical approach for maintaining calibration accuracy over extended periods of time.

The presented procedure begins by identifying the overlapping pairs of cameras necessary for calibration. These pairs form a spanning tree in which overlap is maximized; this tree is referred to as the alignment tree. Each pair is aligned by a two-phase procedure that transforms the data from the coordinate system of the camera at the lower level of the alignment tree to that of the higher. The transformation between each pair is catalogued and used to reconstruct incoming frames from the cameras. Once calibrated, the cameras are assumed to be independent, and their successive frames are compared to detect motion. The catalogued transformations are updated whenever motion is detected, effectively correcting misalignment.
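A minimal sketch of the two structural steps described above: building a maximum-overlap spanning tree (the alignment tree) and chaining the catalogued pairwise transforms so that every camera's data can be mapped into the global (root) coordinate system. It assumes pairwise overlap scores and 4x4 rigid transforms keyed as (parent, child); all names are illustrative assumptions, not the thesis code.

```python
import numpy as np

def alignment_tree(cameras, overlap):
    """Kruskal-style maximum spanning tree: keep the highest-overlap
    pairs that do not form a cycle (union-find for cycle detection)."""
    parent = {c: c for c in cameras}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    edges = []
    for (i, j), frac in sorted(overlap.items(), key=lambda e: -e[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges  # spanning tree edges maximizing total overlap

def global_transforms(root, tree_edges, pairwise_T):
    """Chain catalogued pairwise transforms down the alignment tree so each
    camera's points can be mapped into the root's (global) coordinates.
    pairwise_T[(parent, child)] is assumed to map child frame -> parent frame."""
    children = {}
    for i, j in tree_edges:
        children.setdefault(i, []).append(j)
        children.setdefault(j, []).append(i)
    T_global = {root: np.eye(4)}
    stack = [root]
    while stack:
        parent_cam = stack.pop()
        for child in children.get(parent_cam, ()):
            if child not in T_global:
                # child -> parent, then parent -> global
                T_global[child] = T_global[parent_cam] @ pairwise_T[(parent_cam, child)]
                stack.append(child)
    return T_global
```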

At the end of the calibration process, the reconstructed scene generated from the combined data exhibits consistent alignment accuracy throughout all regions. Using the proposed algorithm, reconstruction accuracy of over 90% was achieved for systems calibrated with an angle of 45 degrees or more between the cameras. Once calibrated, the cameras can observe and reconstruct a scene on every frame. This relies on the assumption that the cameras remain fixed; in practice, however, this cannot be guaranteed. Systems maintained over 90% reconstruction accuracy during operation with induced misalignment. The procedure also maintained the reconstruction accuracy obtained from calibration during execution for up to an hour.

The fundamental contribution of this work is the novel concept of using overlap as a means of expressing how a group of cameras is connected. Building a spanning tree representation of the given system of cameras provides a useful structure for uniquely expressing the relationships between the cameras. A calibration procedure that is effective with low-overlap views is also contributed. The final contribution is a procedure to maintain reconstruction accuracy over time in a mostly static environment.

Included in

Engineering Commons
