This is an in-person-only event, held in AGH 306.
The future of AI is embodied — imagine intelligent agents that can navigate and manipulate the world, from robot assistants helping around the home to autonomous vehicles taking you anywhere safely. To act in the physical world, these agents must do more than process raw sensory inputs: they must reason about the underlying, dynamic world that gives rise to their observations. This requires a 4D scene model — 3D geometry evolving over time — that is object-centric, predictive, and grounded in language. Such representations enable agents to answer questions like “Where am I?”, “What is around me?”, and “How can I interact with this object?”
In this talk, I will advocate for an explicit, amodal representation of world geometry and objects learned from unlabeled sequences. Such a model supports robust perception in dynamic environments and enables language-driven interaction with the world. I will outline a blueprint for building such systems, centered on two complementary components: a slow video object mining method that discovers and tracks arbitrary objects in unlabeled videos, and a fast feed-forward network that learns from these tracks to detect, segment, complete, and forecast object trajectories.
I will trace the progression from early methods for self-supervised object discovery and detection, to recent models capable of promptable, text-conditioned 4D segmentation and amodal scene completion. Taken together, these components form a scalable recipe for learning object-centric 4D representations directly from raw video — a step toward grounded, general-purpose world understanding.