Abstract: 3D reconstruction from two or more images is one of
the most well-studied problems in computer vision. Due to the inverse
nature of the problem, the reconstructed models typically suffer from
various errors. In this talk, I will distinguish between two types of
uncertainty that can cause these errors, namely correspondence and
geometric uncertainty. The former refers to the uncertainty in
determining the correct match for a given pixel, while the latter refers
to the uncertainty in the coordinates of the reconstructed 3D point,
assuming that correct correspondences have been established. Based on
this analysis, I will present an approach for depth map fusion and a
solution to the next-best-view problem in target localization, both of
which benefit from explicit uncertainty modeling.
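
To make the notion of geometric uncertainty concrete, the sketch below shows a standard first-order error-propagation calculation for a rectified stereo pair: even with a perfectly correct correspondence, a small matching error in disparity translates into a depth error that grows quadratically with distance. This is an illustrative example with hypothetical values, not material from the talk itself.

```python
import numpy as np

# Hypothetical rectified-stereo configuration (illustrative values only).
f = 1000.0     # focal length in pixels
B = 0.12       # baseline in metres
d = 40.0       # measured disparity in pixels
sigma_d = 0.5  # std. dev. of the disparity (residual matching error), pixels

# Depth from disparity: Z = f * B / d
Z = f * B / d

# First-order propagation: dZ/dd = -f*B/d^2, so
# sigma_Z ~= (f*B / d^2) * sigma_d = (Z^2 / (f*B)) * sigma_d
sigma_Z = (Z ** 2) / (f * B) * sigma_d

print(f"depth Z = {Z:.3f} m, 1-sigma depth uncertainty = {sigma_Z:.4f} m")
```

With these numbers the point lies at Z = 3 m with roughly 4 cm of depth uncertainty; doubling the distance quadruples the uncertainty, which is one reason explicit uncertainty modeling matters for depth map fusion and view planning.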