Title: Thin Lens-based Geometric Surface Inversion for Multiview Stereo
Committee:
Dr. Anthony Yezzi, ECE, Chair, Advisor
Dr. Patricio Vela, ECE
Dr. Frank Dellaert, IC
Dr. Justin Romberg, ECE
Dr. Sung Ha Kang, Math
Abstract: A fully generative algorithm is developed for the reconstruction of dense three-dimensional shapes from scene images taken under varying viewpoints and levels of focus. Current state-of-the-art multiview methods are founded on a pinhole camera model that assumes perfectly focused images, and they therefore fail when given defocused image data. The method developed herein overcomes this limitation by instead assuming a thin lens, which accurately models defocus blur in images. Though easily stated, this change requires a significant mathematical reformulation from the bottom up, as the simple perspective projection assumed by the pinhole model and used by current methods no longer applies under the more general thin lens model. New expressions are developed for both the forward modeling of image formation and the inversion of that model. For the former, image irradiance is related to scene radiance using energy conservation, and the resulting integral expression has a closed-form solution for in-focus points that is shown to be more general and accurate than the one used in current methods. For the latter, the sensitivities of image irradiance to perturbations in both the scene radiance and the geometry are analyzed, and the necessary gradient descent evolution equations are extracted from these sensitivities. A variational surface evolution algorithm is then formed in which image estimates generated by the thin lens forward model are compared to the actual measured images, and the resulting pixel-wise error is fed into the evolution equations to update the surface shape and scene radiance estimates. This algorithm is experimentally validated for the case of piecewise-constant scene radiance on both computer-generated and real images; the new method accurately reconstructs sharp object features even from severely defocused images and is more robust to noise than pinhole-based methods.
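The central distinction the abstract draws between the pinhole and thin lens models can be illustrated with the classical thin lens circle-of-confusion calculation. The sketch below is not the thesis's forward model; it is a minimal, assumed illustration of why a thin lens of nonzero aperture produces depth-dependent defocus blur while a pinhole (aperture approaching zero) predicts none. All names and parameter values are hypothetical.

```python
def coc_diameter(z, z_focus, f, aperture):
    """Blur-circle (circle of confusion) diameter on the sensor for a
    scene point at depth z, given a thin lens of focal length f and
    aperture diameter `aperture` focused at depth z_focus.
    All distances are in the same units (e.g., meters)."""
    # Thin lens equation 1/f = 1/z + 1/v gives the sensor-side
    # image distance v for the focused depth...
    v_focus = 1.0 / (1.0 / f - 1.0 / z_focus)
    # ...and for the actual point depth z.
    v_point = 1.0 / (1.0 / f - 1.0 / z)
    # Similar triangles through the aperture: the blur diameter scales
    # with the aperture and the mismatch between the two image planes.
    return aperture * abs(v_point - v_focus) / v_point

# A point at the focused depth is imaged sharply (zero blur);
# any other depth is blurred in proportion to the aperture,
# so shrinking the aperture toward a pinhole removes all blur.
print(coc_diameter(2.0, 2.0, 0.05, 0.02))  # in focus: 0.0
print(coc_diameter(1.0, 2.0, 0.05, 0.02))  # out of focus: > 0
print(coc_diameter(1.0, 2.0, 0.05, 0.0))   # pinhole limit: 0.0
```

Because the blur diameter depends on the unknown scene depth z, a generative model built on this geometry can, in principle, recover shape from defocus cues that the pinhole projection discards.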