Building autonomous machines that are generalists, rather than being restricted to narrow domains, is a key challenge in both artificial intelligence and robotics. In this talk, I will argue that model-based agents that learn a predictive model of their environment may offer a feasible path toward machines that can generalize and adapt, thanks to the reusable knowledge such a predictive model acquires. I will then discuss ways of improving current model-based reinforcement learning agents, including scaling data collection with self-supervised exploration and large human-collected datasets, building better predictive models that work directly with visual observations, and designing algorithms for long-term planning in such agents.
Oleh Rybkin is a Ph.D. student in the GRASP lab, advised by Kostas Daniilidis. He works on building agents that learn to reason about the future and plan their actions. His recent work includes training self-supervised agents, achieving long-term reasoning through hierarchical visual planning, and learning shared representations of human and robot motion.