This was a hybrid event with in-person attendance in Wu and Chen and virtual attendance…
Robots are getting better at converting complex natural language commands describing household tasks into step-by-step instructions. Yet they still fail to actually perform such tasks! A prominent explanation for these failures is the fragility of low-level skills (e.g., locomotion, grasping, pushing, object re-orientation) and their inability to generalize to unseen scenarios. In this talk, I will discuss a framework for learning low-level skills that surpasses the limitations of current systems on contact-rich tasks and is real-world-ready: it generalizes, runs in real time with onboard computing, and uses commodity sensors. I will describe the framework through the following case studies:
(i) a dexterous manipulation system capable of re-orienting novel objects.
(ii) a quadruped robot capable of fast locomotion and manipulation on diverse natural terrains.
(iii) a system that learns from a few demonstrations of an object manipulation task and generalizes to new object instances in out-of-distribution configurations.