Abstract: We live in a world of ubiquitous imagery, in which the number of images at our fingertips is growing at a seemingly exponential rate. These images come from a wide variety of sources, including mapping sites, webcams, and millions of photographers around the world uploading billions of images to social media and photo-sharing websites such as Facebook. Taken together, these sources of imagery can be thought of as constituting a distributed camera capturing the entire world at unprecedented scale, and continually documenting its cities, mountains, buildings, people, and events. This talk will focus on how we might use this distributed camera as a fundamental new tool for science, engineering, and environmental monitoring, and how a key problem is *calibration*: determining the geometry of each photo, and relating it to all other photos, in an efficient, automatic way. I will describe our ongoing work on using automated 3D reconstruction algorithms to recover such geometry from massive photo collections, with the goal of using these photos to gain a better understanding of our world.