GRASP Special Seminar: Daphna Weinshall, Hebrew University of Jerusalem, "All Neural Networks are Created Equal"

ABSTRACT

One of the unresolved questions in deep learning concerns the nature of the solutions being discovered. We investigate the collection of solutions reached by the same network architecture under different random weight initializations and random mini-batch orderings. These solutions turn out to be remarkably similar: more often than not, each training and test example is either classified correctly by all the networks or by none of them. Surprisingly, all the network instances seem to share the same learning dynamics, whereby initially the same training and test examples are correctly recognized by the learned models, followed by other examples that are learned in roughly the same order. When we extend the investigation to heterogeneous collections of neural network architectures, examples are again learned in the same order irrespective of architecture, although a more powerful architecture may continue to learn and thus achieve higher accuracy. This pattern of results persists even when the composition of classes in the test set is unrelated to the training set, for example when using out-of-sample natural images or even artificial images. To demonstrate the robustness of these phenomena, we provide an extensive summary of our empirical study, which includes hundreds of graphs describing tens of thousands of networks with varying NN architectures, hyper-parameters, and domains. We also discuss cases where this pattern of similarity breaks down, which show that the reported similarity is not an artifact of optimization by gradient descent; rather, the observed pattern of similarity is characteristic of learning complex problems with big networks. Finally, we show that this pattern of similarity appears to be strongly correlated with effective generalization.
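The kind of experiment the abstract describes can be sketched in miniature: train several copies of the same small network with different random seeds, record for each example the epoch from which it remains correctly classified, and compare those "learning orders" across seeds. The sketch below is purely illustrative and is not the authors' code: it uses a toy numpy MLP on synthetic two-class data, a crude "consistently correct from epoch e onward" learning-time proxy, and a hand-rolled rank correlation, all of which are assumptions of this example rather than details from the talk.

```python
import numpy as np

def make_data(n=200, seed=0):
    # Synthetic two-class problem: two overlapping Gaussian blobs,
    # so some examples are easy (far from the boundary) and some are hard.
    rng = np.random.default_rng(seed)
    X = np.vstack([rng.normal(-1.0, 1.0, size=(n // 2, 2)),
                   rng.normal(+1.0, 1.0, size=(n // 2, 2))])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

def learning_times(X, y, seed, epochs=50, lr=1.0, hidden=16):
    # Tiny 1-hidden-layer MLP trained by full-batch gradient descent.
    # Returns, per example, the first epoch from which it stays correctly
    # classified until the end of training (a proxy for learning order).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    correct = np.zeros((epochs, n), dtype=bool)
    for epoch in range(epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        correct[epoch] = (p > 0.5).astype(int) == y
        # Backprop for binary cross-entropy loss.
        dz2 = (p - y) / n
        dW2 = h.T @ dz2
        db2 = dz2.sum()
        dh = np.outer(dz2, W2) * (1.0 - h ** 2)    # tanh derivative
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
    # suffix_all[e, i] is True iff example i is correct from epoch e onward.
    suffix_all = np.flip(np.cumprod(np.flip(correct, 0), 0), 0).astype(bool)
    return np.where(suffix_all.any(0), suffix_all.argmax(0), epochs)

def rank_correlation(a, b):
    # Spearman correlation computed as Pearson correlation of ranks.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

X, y = make_data()
orders = [learning_times(X, y, seed=s) for s in range(5)]
corrs = [rank_correlation(orders[i], orders[j])
         for i in range(5) for j in range(i + 1, 5)]
print(f"mean pairwise rank correlation of learning order: {np.mean(corrs):.2f}")
```

In this toy setting, a positive mean correlation means the differently-seeded networks tend to acquire the same examples in the same order — the qualitative effect the abstract reports at much larger scale with real architectures and datasets.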

PRESENTER'S BIOGRAPHY

Daphna Weinshall received her PhD degree from Tel-Aviv University in 1986, working on models of evolution and population genetics. Her current research interests lie in the areas of computer vision and machine learning, including human vision and human learning, and specifically areas related to object recognition and high-level vision. In recent years, she has focused her research on the use of deep learning techniques to advance visual object recognition by way of related questions, including novelty detection, small-sample and one-shot learning, and knowledge transfer. She has published about 130 papers in journals and peer-reviewed conferences.