Abstract: As robotic systems move from
well-controlled settings to increasingly unstructured
environments, they are required to operate in highly
dynamic and cluttered scenarios. Finding an object,
estimating its pose, and tracking its pose over time
within such scenarios are challenging problems. Although
various approaches have been developed to tackle these
problems, the scope of objects addressed and the
robustness of solutions remain limited. In this talk, I
will present robust object perception in unstructured
environments using visual sensory information, spanning
from the traditional monocular camera to the more
recently emerged RGB-D sensor. I will address four key
challenges to robust 6-DOF object pose estimation and
tracking that current state-of-the-art approaches have,
as yet, failed to solve: significant background clutter,
objects with and without texture, discontinuities of the
object during tracking, and real-time constraints.
Examples of object pose estimation and tracking will be
shown, along with several applications in robotics.