Computer vision models trained on unprecedented amounts of data hold promise for making impartial, well-informed decisions in a variety of applications. Increasingly, however, historical societal biases are making their way into these seemingly innocuous systems. Visual recognition models have exhibited bias by inappropriately correlating age, gender, sexual orientation, and race with their predictions. The downstream effects of such bias range from perpetuating harmful stereotypes at unprecedented scale to increasing the likelihood of being unfairly identified as a suspect in a crime (as when face recognition, which is notoriously less accurate on Black faces than on White faces, is used in surveillance cameras). In this talk, we'll dive deeper into both the technical causes of algorithmic bias in computer vision and its potential solutions. Among other things, we will discuss our most recent work (in submission) on training deep learning models that de-correlate a sensitive attribute (such as race or gender) from the target prediction.
Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with machine learning and human-computer interaction. She completed her PhD at Stanford University and her postdoctoral fellowship at Carnegie Mellon University. She has served as a Senior Program Committee member for WACV’16, CVPR’18, and CVPR’19; has organized eight workshops and tutorials on large-scale recognition; and has given more than 50 invited talks at universities, companies, workshops, and conferences. She was awarded the PAMI Everingham Prize in 2016 as one of the leaders of the ImageNet Large Scale Visual Recognition Challenge, received MIT Technology Review's 35-under-35 Innovator award in 2017, and was named one of Foreign Policy Magazine's 100 Leading Global Thinkers in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of AI4ALL, a foundation dedicated to increasing diversity and inclusion in AI. She co-founded the Stanford AI4ALL camp (formerly "SAILORS"), which teaches AI for social good to high school girls, and the Princeton AI4ALL camp, which teaches AI technology and policy to underrepresented minority high school students.