Master's Student Learning Seminar

On Characterizing the Capacity of Neural Networks using Algebraic Topology

Speaker: William Guss, Carnegie Mellon University

Location: Warren Weaver Hall 517

Date: Thursday, April 19, 2018, 4 p.m.


The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this talk, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision region is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions. This observation allows us to connect existing theory to empirically driven conjectures on the choice of architectures for fully-connected neural networks. Finally, we provide some first steps in building a general theory of neural homology.
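The topological complexity mentioned above is typically summarized by Betti numbers (the number of connected components, loops, voids, and so on in a dataset). As a minimal illustration only, and not the speaker's method, the sketch below computes just the 0th Betti number (connected components) of a point cloud by union-find on its ε-neighborhood graph; the name `betti0` and the parameter `eps` are illustrative choices, not from the talk or paper.

```python
import math

def betti0(points, eps):
    """Count connected components (the 0th Betti number) of the
    eps-neighborhood graph on a point cloud, via union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)  # merge the two components

    return len({find(i) for i in range(len(points))})

# Two well-separated clusters on the x-axis
pts = [(0.0, 0.0), (0.5, 0.0), (10.0, 0.0), (10.5, 0.0)]
print(betti0(pts, 1.0))   # clusters stay separate -> 2
```

Varying `eps` changes the count: at a very small scale every point is its own component, and at a large scale everything merges into one, which is the scale-dependence that persistent homology tracks systematically.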


Bio: William Guss --
William Guss is a PhD candidate in the Machine Learning Department at CMU and co-founder of Infoplay AI. He is advised by Dr. Ruslan Salakhutdinov, and his research spans reinforcement learning, natural language processing, and deep learning theory, particularly at the intersection of algebraic topology, computational geometry, and learning theory. William completed his bachelor's degree in Pure Mathematics at UC Berkeley, where he was awarded the Regents' and Chancellor's Scholarship. During his time at Berkeley, William received the Amazon Alexa Prize Grant for the development of conversational AI and co-founded Machine Learning at Berkeley.


Link to the paper: