The world is structured in countless ways. When cognitive and machine models respect these structures by factorizing their modules and parameters, they can achieve remarkable accuracy and generalization. For instance, the spatial factorization of convolutional networks (inspired by visual neuroscience) has led to enormous progress in machines' ability to transform and recognize visual input. In this talk, I will discuss our work investigating the factorizations of objects, physics, and events/modes in both humans and machines. I will show how to harness object and relational structure in the form of graph networks to improve machine generalization, how to harness physics to better explain creative human tool use in a novel physical problem-solving environment, and how to harness events/modes to enable a robotic agent to plan with tools. To go from harnessing structure to discovering it, I will then describe our initial steps toward learning event/mode structures, which lead to improvements in few-shot learning and compositional dynamics modeling. By combining general structures that are true of the world with general-purpose methods for statistical learning, we can develop more robust and data-efficient machine agents, and better explain how natural intelligence learns so much from so little.