*This was a hybrid event with in-person attendance in Levine 307 and virtual attendance…*
In this talk, I will provide a brief introduction to our recent progress in applying optimal control and deep reinforcement learning (RL) to legged robots in the real world. I will then dive into our recent work on bridging model-based safety-critical control and model-free RL on a highly nonlinear and complex system, the bipedal robot Cassie. Bridging model-based safety and model-free RL for dynamic robots is appealing: model-based methods can provide formal safety guarantees, while RL-based methods can exploit the robot's agility by learning from the full-order system dynamics. I will discuss a new method that combines them by explicitly finding a low-dimensional model of the system controlled by an RL policy and enforcing stability and safety guarantees on that simplified model.