Abstract
Building autonomous machines that are generalists rather than being restricted to narrow domains is a key challenge in both artificial intelligence and robotics. In this talk, I will argue that building model-based agents that learn a predictive model of their environment may be a feasible path towards machines that can generalize and adapt, thanks to the reusable knowledge captured by such a predictive model. I will then discuss ways of improving current model-based reinforcement learning agents: scaling data collection with self-supervised exploration as well as large human-collected datasets, building better predictive models that work directly on visual observations, and designing algorithms for long-term planning in such agents.