*This was a hybrid event, with in-person attendance for Heng Yang's talk in Levine 307 and virtual attendance via Zoom Webinar.*
Geometric perception is the task of estimating geometric models (e.g., object pose and 3D structure) from sensor measurements and priors (e.g., point clouds and neural network detections). It is a fundamental building block for robotics applications ranging from intelligent transportation to space autonomy. The ubiquitous presence of outliers (measurements that carry little or no information about the models to be estimated) makes estimation with guaranteed optimality theoretically intractable. Despite this intractability, safety-critical robotics applications still demand trustworthiness and performance guarantees from perception algorithms.

In this talk, I present certifiable outlier-robust geometric perception, a new paradigm for designing tractable algorithms with rigorous performance guarantees: they return an optimal estimate with a certificate of optimality for the majority of problem instances, but declare failure and provide a measure of suboptimality on worst-case instances. In particular, I present two general-purpose algorithms in the certifiable perception toolbox: (i) an estimator that uses graph theory to prune gross outliers and leverages graduated non-convexity to compute the optimal model estimate with high probability of success, and (ii) a certifier that employs sparse semidefinite programming (SDP) relaxation and a novel SDP solver to endow the estimator with an optimality certificate, or to escape local minima otherwise. The estimator is fast and robust against up to 99% random outliers in practical perception applications, and the certifier can compute high-accuracy optimality certificates for large-scale problems beyond the reach of existing SDP solvers. I showcase certifiable outlier-robust perception on robotics applications such as scan matching, satellite pose estimation, and vehicle pose and shape estimation.
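To give a flavor of the graduated non-convexity (GNC) idea mentioned above, here is a minimal, illustrative sketch using a truncated least squares (TLS) robust cost on a toy 1-D estimation problem (robust mean estimation). The data, inlier threshold `c`, continuation schedule, and helper name `gnc_tls_mean` are illustrative assumptions, not the speaker's implementation; real applications apply the same alternation (weighted least squares step, then weight update, then tighten the surrogate) to pose and shape variables.

```python
# Illustrative sketch of GNC with a truncated least squares (TLS) cost,
# applied to 1-D robust mean estimation. All constants are assumptions.

def gnc_tls_mean(data, c=1.0, mu_factor=1.4, iters=50):
    """Robustly estimate a mean: inliers keep weight 1, while measurements
    whose residual exceeds the threshold c are gradually driven to weight 0."""
    w = [1.0] * len(data)                      # start fully convex: all weights 1
    est = sum(data) / len(data)                # plain least-squares initialization
    r2_max = max((x - est) ** 2 for x in data)
    # start with a surrogate close to convex (small mu), then anneal toward TLS
    mu = c ** 2 / (2 * r2_max - c ** 2) if 2 * r2_max > c ** 2 else 1e-3
    for _ in range(iters):
        # weighted least-squares step (closed form for the mean)
        est = sum(wi * x for wi, x in zip(w, data)) / max(sum(w), 1e-12)
        # GNC-TLS weight update: hard 0/1 outside a band, smooth in between
        for i, x in enumerate(data):
            r2 = (x - est) ** 2
            upper = (mu + 1) / mu * c ** 2
            lower = mu / (mu + 1) * c ** 2
            if r2 >= upper:
                w[i] = 0.0                     # confident outlier
            elif r2 <= lower:
                w[i] = 1.0                     # confident inlier
            else:
                w[i] = c * ((mu * (mu + 1)) ** 0.5) / (r2 ** 0.5) - mu
        mu *= mu_factor                        # tighten the surrogate toward TLS
    return est, w

# five inliers clustered around 2.0, plus two gross outliers
data = [2.0, 2.1, 1.9, 2.05, 1.95, 50.0, -40.0]
est, w = gnc_tls_mean(data)
```

The continuation in `mu` is what makes this "graduated": early iterations solve a nearly convex surrogate so the estimate is not captured by outliers, and later iterations recover the exact truncated cost, at which point the gross outliers receive zero weight.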
I conclude by remarking on opportunities for integrating certifiable perception with big data, machine learning, and safe control toward trustworthy autonomy.