Analysis Seminar

Emergent Bottleneck Structure in Deep Neural Nets: a Theory of Feature and Symmetry Learning

Speaker: Arthur Jacot, New York University

Location: Warren Weaver Hall 1302

Date: Thursday, December 7, 2023, 11 a.m.

Synopsis:

Deep Neural Networks have proven able to break the curse of
dimensionality and learn complex tasks on high-dimensional data such
as images or text, but we still do not fully understand what makes
this possible. To address this question, I will describe the emergence
of a Bottleneck structure as the number of layers grows: the network
learns low-dimensional features in its middle layers. This structure
allows the network to identify and learn the symmetries of the task it
is trained on, without any prior knowledge. This could explain the
success of Deep Learning on image and text tasks, which feature many
'hidden' symmetries.