3D Multi-Camera Calibration

3D Multi-Camera Calibration with a Fractal Encoded Multi-Shape Target

Robust multi-camera calibration is a long-standing goal that has, up to now, leveraged discrete camera model fitting from sparse target observations. Stereo systems, photogrammetry, and light-field arrays have all demonstrated the need for geometrically consistent calibrations to achieve higher levels of subpixel localization accuracy for improved depth estimation. This work presents a calibration framework that leverages multi-directional and diverse features to achieve improved dense calibrations of camera systems. We begin by presenting a 2D target that uses an encoded feature set, each feature carrying 12 bits of uniqueness for flexible patterning and easy identification. These feature sets combine orthogonal sets of straight and circular binary edges, along with Gaussian peaks. Our proposed algorithm uses a steerable-filter technique for edge and corner feature localization, and ellipsoidal 2D Gaussian peak fitting for the area features. Feature uniqueness enables association across views, and the resulting correspondences are combined into a 3D pose graph for nonlinear optimization. Existing camera models are leveraged for initial distortion estimates, and a dense remapping is achieved by augmenting the sparse rectification parameters with a pixel-wise rectification to minimize reprojection residuals.
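To make the area-feature localization step concrete, the sketch below shows one common way to recover a subpixel peak center by fitting an elliptical (rotated, anisotropic) 2D Gaussian to an image patch with nonlinear least squares. This is an illustrative implementation under assumed details (SciPy's `curve_fit`, centroid-based initialization), not the authors' code; the paper does not specify its solver or parameterization.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sx, sy, theta, offset):
    """Elliptical 2D Gaussian with rotation angle theta, plus a DC offset."""
    x, y = coords
    ct, st = np.cos(theta), np.sin(theta)
    a = ct**2 / (2 * sx**2) + st**2 / (2 * sy**2)
    b = -np.sin(2 * theta) / (4 * sx**2) + np.sin(2 * theta) / (4 * sy**2)
    c = st**2 / (2 * sx**2) + ct**2 / (2 * sy**2)
    return offset + amp * np.exp(
        -(a * (x - x0) ** 2 + 2 * b * (x - x0) * (y - y0) + c * (y - y0) ** 2)
    )

def fit_peak(patch):
    """Fit an elliptical 2D Gaussian to a patch; return the subpixel (x, y) center."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.vstack([xx.ravel(), yy.ravel()])
    # Initial guess from simple patch statistics (amplitude, center, widths).
    p0 = [patch.max() - patch.min(), w / 2, h / 2, w / 4, h / 4, 0.0, patch.min()]
    popt, _ = curve_fit(gaussian_2d, coords, patch.ravel(), p0=p0)
    return popt[1], popt[2]
```

In a full pipeline, each decoded area feature would supply a small patch around its approximate location, and the fitted centers would feed the cross-view association and pose-graph optimization described above.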

Electronic Imaging 2021 Conference, 3D Imaging and Applications, “3D Multi-Camera Calibration with a Fractal Encoded Multi-Shape Target”, D. E. Meyer, P. Verma,  F. Kuester

UC San Diego