Real-world robotics problems often occur in domains that differ significantly from the robot’s prior training environment. For many robotic perception tasks, real-world experience is expensive to obtain, but data is easy to collect in an instrumented environment or in simulation. However, perception models trained on such data often fail to generalize to real-world environments. I will describe several recent approaches that we have developed to address this so-called domain shift problem. In particular, I will show that adversarial learning techniques can adapt visual representations learned on large, easy-to-obtain source datasets (e.g., synthetic images) to a real-world target domain, without requiring expensive manual annotation of real-world data.
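To give a sense of the mechanism, the following is a minimal NumPy sketch of one common form of adversarial feature adaptation: a domain classifier is trained to distinguish source from target features, while the feature extractor receives the *reversed* gradient of the domain loss, pushing it toward domain-invariant features. This is an illustrative toy (linear models, synthetic 2-D data, hypothetical hyperparameters), not the specific methods described in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "source" inputs centered at -1, "target" inputs at +1 (2-D)
Xs = rng.normal(-1.0, 0.5, size=(100, 2))
Xt = rng.normal(+1.0, 0.5, size=(100, 2))
X = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(100), np.ones(100)])  # domain labels

# Parameters: linear feature extractor W and linear domain classifier v
W = rng.normal(0.0, 0.1, size=(2, 2))
v = rng.normal(0.0, 0.1, size=2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.1, 1.0  # learning rate and reversal strength (illustrative values)
for step in range(500):
    f = X @ W                       # extracted features
    p = sigmoid(f @ v)              # predicted probability of "target" domain
    err = p - d                     # gradient of binary cross-entropy wrt logits
    grad_v = f.T @ err / len(X)                # classifier gradient of domain loss
    grad_W = X.T @ np.outer(err, v) / len(X)   # extractor gradient of domain loss
    v -= lr * grad_v                # classifier *descends* the domain loss
    W += lr * lam * grad_W          # gradient reversal: extractor *ascends* it

# If adaptation succeeds, the domain classifier approaches chance accuracy,
# meaning the features no longer reveal which domain an input came from.
acc = float(np.mean((sigmoid(X @ W @ v) > 0.5) == d))
```

In a full system, the same domain-invariant features would feed a task head (e.g., an object classifier) trained on labeled source data only, which is what removes the need to annotate target-domain images.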