This event will be in-person ONLY in Wu and Chen Auditorium.
Prevailing models in robotics reason about the world either as images (end-to-end learning approaches) or as a collection of rigid objects (classical approaches), but neither has proven to be a suitable abstraction for manipulating cloth, ropes, piles of objects, plants, and natural terrain. My lab is investigating novel representations of “stuff” that are built de novo from visual and tactile perception data, and whose properties are learned continuously through interaction. Volumetric Stiffness Fields, Graph Neural Networks, Neural Dynamics, and 3D metric-semantic maps are examples of models that allow robots to learn about their environment without preconceived notions of individual objects, their physical properties, or how they interact. Across a variety of domains and materials, these techniques are able to model complex interactions, uncertainty, and multi-modal correlations between appearance and physical properties. Applications will be shown in agriculture, construction, and household object manipulation.
(This talk solely represents the research and opinions of Dr. Hauser under his UIUC affiliation, and does not communicate any results, statements, or opinions on behalf of Samsung Research America, Samsung Electronics, or any of its affiliates.)