[VIRTUAL] Spring 2021 GRASP SFI: Anusha Nagabandi, UC Berkeley, “Model-Based Deep RL for Robotics”
February 17, 2021 @ 3:00 pm - 4:00 pm
Deep learning has shown promising results in robotics, but we are still far from having intelligent systems that can operate in the unstructured settings of the real world, where disturbances, variations, and unobserved factors lead to a dynamic environment.
In the first part of the talk, I will show that model-based deep RL can indeed allow for efficient skill acquisition, as well as the ability to repurpose models to solve a variety of tasks. I will then scale up these approaches to enable locomotion with a 6-DoF legged robot on varying terrains in the real world, as well as dexterous manipulation with a 24-DoF anthropomorphic hand in the real world.
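The core loop behind model-based deep RL as described above is: learn a dynamics model from interaction data, then plan through that model to choose actions. A minimal sketch of the planning side is random-shooting model-predictive control (MPC), one common choice in this line of work; the environment, cost function, and "learned" model below are all illustrative stand-ins, not the speaker's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D point-mass environment: state = [position, velocity].
def true_dynamics(state, action):
    pos, vel = state
    vel = vel + 0.1 * action
    return np.array([pos + 0.1 * vel, vel])

# Stand-in for a learned dynamics model f(s, a) -> s'. In practice this
# would be a deep network trained on logged transitions; here we use the
# true dynamics plus noise to keep the sketch self-contained.
def learned_model(state, action):
    return true_dynamics(state, action) + rng.normal(0.0, 0.01, size=2)

def cost(state):
    # Illustrative task: drive the position to a goal at x = 1.0.
    return (state[0] - 1.0) ** 2

def mpc_action(state, horizon=10, n_candidates=200):
    """Random-shooting MPC: sample candidate action sequences, roll each
    out through the model, return the first action of the cheapest one."""
    best_action, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state.copy(), 0.0
        for a in actions:
            s = learned_model(s, a)
            total += cost(s)
        if total < best_cost:
            best_cost, best_action = total, actions[0]
    return best_action

# Closed-loop control: replan at every step (the hallmark of MPC).
state = np.array([0.0, 0.0])
for _ in range(100):
    state = true_dynamics(state, mpc_action(state))
```

Because the model is task-agnostic, swapping in a different cost function repurposes the same learned dynamics for a new task, which is the "repurposing" property mentioned above.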
In the second part of the talk, I will focus on the inevitable mismatch between an agent’s training conditions and the test conditions in which it may actually be deployed, thus illuminating the need for adaptive systems. Inspired by the ability of humans and animals to adapt quickly in the face of unexpected changes, I will present a meta-learning algorithm within this model-based RL framework to enable online adaptation of large, high-capacity models using only small amounts of data from the new task. These fast adaptation capabilities are demonstrated in both simulation and the real world, with experiments such as a 6-legged robot adapting online to an unexpected payload or suddenly losing a leg. Finally, I will further extend the capabilities of our robotic systems by enabling the agents to reason directly from raw image observations. Bridging the benefits of representation learning techniques with the adaptation capabilities of meta-RL, I will present a unified framework for effective meta-RL from images. Using real-world robotic arms that learn peg insertion and Ethernet cable insertion to varying targets, I will show the fast acquisition of new skills directly from raw image observations.
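The online-adaptation idea sketched above is often realized MAML-style: meta-training produces model parameters from which a few gradient steps on the most recent transitions yield a model of the changed dynamics. The toy below illustrates only that inner adaptation loop, on a hypothetical scalar system whose action gain drops at deployment time (e.g. a payload is added); the "meta-learned prior" is just a hand-picked initialization, not an actual meta-trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar dynamics s' = w*s + b*a. The parameters change when
# the robot is perturbed; the agent must re-fit them online from few samples.
theta = np.array([0.9, 0.5])  # stand-in for a meta-learned prior: [w, b]

def predict(theta, s, a):
    return theta[0] * s + theta[1] * a

def adapt(theta, recent, lr=0.2, steps=100):
    """Inner-loop update, MAML-style: a handful of gradient steps on the
    most recent transitions produce task-specific parameters."""
    th = theta.copy()
    for _ in range(steps):
        grad = np.zeros(2)
        for s, a, s_next in recent:
            err = predict(th, s, a) - s_next
            grad += 2.0 * err * np.array([s, a])
        th -= lr * grad / len(recent)
    return th

def mse(theta, data):
    return np.mean([(predict(theta, s, a) - sn) ** 2 for s, a, sn in data])

# Deployment-time change: the action gain halves (illustrative perturbation).
w_true, b_true = 0.9, 0.25
recent = []
for _ in range(16):  # only a small amount of data from the new task
    s, a = rng.normal(), rng.uniform(-1.0, 1.0)
    recent.append((s, a, w_true * s + b_true * a))

mse_before = mse(theta, recent)
theta_adapted = adapt(theta, recent)
mse_after = mse(theta_adapted, recent)
```

In the full method, the outer meta-training loop would optimize the initialization so that exactly this kind of few-sample inner update succeeds across many perturbations; the planning loop then simply uses the freshly adapted model.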
I conclude that model-based deep RL provides a framework for making sense of the world, thus allowing for reasoning and adaptation capabilities that are necessary for successful operation in the dynamic settings of the real world.