For autonomous systems such as drones and self-driving cars to be successfully integrated into our day-to-day lives, they must be able to quickly adapt to ever-changing environments and to actively reason about their own safety and that of the humans and other autonomous systems around them. Control-theoretic approaches have been used for decades for the control and safety analysis of autonomous systems, but they typically assume a known model of the system dynamics and of the environment in which the system operates. To overcome this limitation, machine learning approaches have been explored to operate autonomous systems intelligently and reliably in unpredictable environments based on prior data. However, the learning techniques widely used today are extremely data-inefficient, making them challenging to apply to real-world physical systems. Moreover, they lack the mathematical framework necessary to provide correctness guarantees, raising safety concerns as data-driven physical systems are integrated into our society.
In this talk, we will present a toolbox of methods that combine robust optimal control with data-driven techniques inspired by machine learning to improve performance while maintaining safety. In particular, we design modular architectures that combine system dynamics models with modern learning-based perception approaches to solve challenging perception and control problems in a priori unknown environments in a data-efficient fashion. We demonstrate these approaches on a variety of ground robots navigating unknown buildings around humans using only onboard visual sensors. Next, we discuss how optimal control methods can be used not only for data-efficient learning, but also to monitor and recognize failures of the learning system, and to provide corrective safe actions online when necessary. This allows us to provide safety assurances for learning-enabled systems in unknown and human-centric environments, which has remained a challenge to date.
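The monitoring-and-override idea above can be illustrated with a minimal sketch of a least-restrictive safety filter: a learned policy acts freely, and a safety monitor intervenes only when the proposed action would take the system outside a verified safe set. All names, the double-integrator dynamics, and the braking-distance safe set below are illustrative assumptions for this sketch, not the specific system described in the talk.

```python
A_MAX = 1.0    # maximum braking magnitude (assumed)
X_WALL = 10.0  # position of an obstacle ahead (assumed)
DT = 0.1       # control period in seconds (assumed)

def step(x, v, a, dt=DT):
    """One Euler step of double-integrator dynamics (position, velocity)."""
    return x + v * dt, v + a * dt

def is_safe(x, v):
    """Safe set: the vehicle can still brake to a stop before the wall.
    Stopping distance at maximal braking is v^2 / (2 * A_MAX)."""
    if v <= 0:
        return x < X_WALL
    return x + v * v / (2.0 * A_MAX) < X_WALL

def safety_filter(x, v, a_learned):
    """Monitor the learned action: apply it if its successor state stays
    in the safe set; otherwise override with maximal braking."""
    xn, vn = step(x, v, a_learned)
    if is_safe(xn, vn):
        return a_learned
    return -A_MAX if v > 0 else 0.0

# Usage: close to the wall, the learned controller wants to accelerate,
# so the monitor overrides it with braking; far from the wall it does not.
a_near = safety_filter(8.0, 2.0, a_learned=1.0)  # overridden to -A_MAX
a_far = safety_filter(0.0, 0.0, a_learned=1.0)   # passed through unchanged
```

The filter is "least restrictive" in the sense that it leaves the learned policy untouched whenever its action is provably safe, intervening only at the boundary of the safe set; in the talk's setting the safe set would come from a robust reachability analysis rather than this closed-form braking bound.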