While perception has traditionally served action in robotics, it has long been argued that intelligent action generation can benefit perception, and that carefully coupling perception with action can improve the performance of both. In this talk, I will report on recent progress in model-based and learning-based approaches that address aspects of the problem of closing perception-action loops.
The first part of my talk will focus on a model-based active-perception technique that optimizes trajectories for self-calibration. The method takes motion constraints into account and produces an optimal trajectory that yields fast convergence of the estimates of the self-calibration states and of other user-chosen states. In the second part of my talk, I will present a deep reinforcement learning framework that learns manipulation skills on a real robot in a reasonable amount of time. The method handles contact and discontinuities in the dynamics by combining the efficiency of model-based techniques with the generality of model-free reinforcement learning.
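To give a flavor of the first idea, here is a deliberately simplified sketch (not the speaker's method; all names and the sensor model are illustrative assumptions): among candidate motions, pick the one that accumulates the most Fisher information about a calibration parameter, which is the trajectory under which its estimate converges fastest. The toy model is a rate sensor with unknown scale factor `s`, measured as `z = s*v + noise`.

```python
# Toy active-perception sketch (illustrative only, not the talk's
# algorithm). Sensor model assumption: z_k = s * v_k + w_k, with
# measurement noise variance SIGMA2 and unknown scale factor s.
SIGMA2 = 0.04  # assumed noise variance

def fisher_info(traj):
    """Fisher information about s accumulated along a velocity profile.

    For z = s*v + w, each sample contributes v^2 / SIGMA2, so motions
    that excite the sensor more are more informative about s.
    """
    return sum(v * v for v in traj) / SIGMA2

def best_trajectory(candidates):
    """Pick the candidate motion with the largest accumulated
    information, i.e. the smallest posterior variance on s."""
    return max(candidates, key=fisher_info)

# A gentle motion vs. an aggressive one that excites the scale factor.
gentle = [0.1] * 10
aggressive = [0.1, 0.8, -0.8, 0.9, -0.9, 0.8, -0.7, 0.9, -0.8, 0.7]
chosen = best_trajectory([gentle, aggressive])
print(chosen is aggressive)  # the richer excitation is selected
```

The real technique optimizes over continuous trajectories subject to motion constraints and over several self-calibration states at once; this sketch only conveys why the choice of motion changes how fast the estimates converge.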
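The hybrid flavor of the second part can be caricatured with a classic, well-known combination of model-based and model-free learning: Dyna-Q (Sutton), which interleaves model-free Q-learning updates from real experience with replayed updates from a learned one-step model. This is emphatically not the speaker's framework, only a minimal illustration of why a learned model improves sample efficiency; the MDP and all constants are assumptions.

```python
import random

# Dyna-Q on a toy deterministic chain MDP: states 0..4, actions -1/+1,
# reward 1 on reaching state 4. Illustrative only.
N, GOAL = 5, 4
ACTIONS = (-1, 1)
ALPHA, GAMMA, EPS, PLANNING = 0.5, 0.95, 0.1, 20

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(Q, s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
model = {}  # learned one-step model: (s, a) -> (s2, r)

for _ in range(50):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(Q, s)
        s2, r, done = step(s, a)
        # Model-free Q-learning update from real experience.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        model[(s, a)] = (s2, r)  # fit the (here trivial) dynamics model
        # Model-based planning: replay transitions from the learned model,
        # squeezing more value updates out of each real interaction.
        for _ in range(PLANNING):
            (ps, pa), (ps2, pr) = random.choice(list(model.items()))
            t = pr + (0.0 if ps2 == GOAL else GAMMA * max(Q[(ps2, b)] for b in ACTIONS))
            Q[(ps, pa)] += ALPHA * (t - Q[(ps, pa)])
        s = s2

policy = [greedy(Q, s) for s in range(N - 1)]
print(policy)  # greedy policy after training
```

Learning manipulation with contact on a real robot is far harder than this tabular toy, but the division of labor is the same: model-free updates keep the method general, while the learned model supplies cheap extra training signal.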