Abstract: Context-awareness and instinctive response to visual stimuli in the environment are the new challenges for joint action between humans and technical systems. I will present how we use sensor information from monocular, binocular, and lidar systems to control technical systems in a wide range of applications. The areas of research include medical robots for minimally invasive surgery, humanoids, and mobile/flying systems. The sensor data is processed at different levels of abstraction, allowing implementations on systems with strongly varying processing power and cycle-time requirements, ranging from a few hundred microseconds to hundreds of milliseconds. This makes it possible to provide the necessary information at different levels of control: from basic stabilization tasks that are essential for flying systems, such as blimps and quadrocopters, to advanced planning and localization systems that operate at a slower rate but require context knowledge and/or global information.
I will give an overview of my work at the Technical University of Munich, with a joint appointment at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) in Oberpfaffenhofen. My research involves endoscopic navigation for minimally invasive surgery on N-from-M manipulator systems, exploration tasks for humanoids and manipulator systems, navigation tasks on autonomous cars using a combination of a parallel lidar system with multi-focal cameras, and navigation and control of indoor and outdoor flying systems. The flying systems are used for vision-based 3D reconstruction of historical sites such as Neuschwanstein Castle.