There have been tremendous advances in applying deep learning techniques to 2D image understanding. In contrast, very little work has focused on employing deep learning to model data beyond 2D, such as 3D geometry and 4D light fields. In this talk, I present several recent works from our group in this exciting new arena, with a focus on their applications to virtual and augmented reality and computational photography. I first present a novel deep surface light field (DSLF) technique. A surface light field represents the radiance of rays originating from any point on the surface in any direction. Traditional approaches require ultra-dense sampling to ensure rendering quality. Our DSLF works on sparse data: it automatically fills in missing data by leveraging different sampling patterns across the vertices, and at the same time eliminates redundancies through the network's prediction capability. For real data, we address the image registration problem and conduct texture-aware remeshing that aligns texture edges with vertices to avoid blurring. Next, I present an end-to-end deep learning scheme to establish dense shape correspondences and subsequently compress dynamic 3D human bodies. Our approach uses a sparse set of "panoramic" depth maps, or PDMs, each emulating an inward-viewing concentric mosaic (CM). We then develop a learning-based technique to compute pixel-wise feature descriptors on the PDMs. The results are fed into an autoencoder-based network to achieve an ultra-high compression ratio.
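To make the surface light field abstraction concrete, here is a minimal toy sketch (not the DSLF network itself): radiance is a function of a surface point and a viewing direction, and with only sparse per-vertex direction samples a query falls back to the nearest stored direction. All names and data below are hypothetical, for illustration only; the actual method replaces this lookup with a learned predictor.

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def query_slf(slf, vertex_id, view_dir):
    """Toy surface light field query: radiance leaving `vertex_id`
    along `view_dir`, via nearest-direction lookup over the sparse
    samples stored for that vertex. A deep SLF would instead predict
    radiance for unsampled directions from the network."""
    d = normalize(view_dir)
    samples = slf[vertex_id]  # list of (unit_direction, rgb) pairs
    # Pick the stored sample whose direction best aligns with the query
    # (largest dot product).
    best = max(samples, key=lambda s: sum(a * b for a, b in zip(s[0], d)))
    return best[1]

# Hypothetical sparse data: one vertex observed from two directions.
slf = {0: [(normalize((0.0, 0.0, 1.0)), (255, 0, 0)),
           (normalize((1.0, 0.0, 0.0)), (0, 255, 0))]}
```

For example, `query_slf(slf, 0, (0.1, 0.0, 0.9))` returns `(255, 0, 0)`, since the query direction is closest to the stored `(0, 0, 1)` sample.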