Mathematics Colloquium

Scientific Uses of Automatic Differentiation

Speaker: Michael Brenner, Harvard

Location: Warren Weaver Hall 1302

Date: Monday, January 29, 2024, 3:45 p.m.

Synopsis:

There is much excitement (some of it legitimate) about applications of machine learning to the sciences. Here I will argue that a primary opportunity is not machine learning per se, but rather that the tools underlying the ML revolution open significant opportunities for scientific discovery. Chief among these tools are automatic differentiation and the scalability of modern codes. Neural network architectures resemble time rollouts in dynamical systems, so the technical advances underlying ML can translate directly into the ability to solve important optimization problems in the sciences that have heretofore not been tackled. I will describe a number of directions we have been pursuing that use automatic differentiation and large-scale optimization to solve science problems, including: new algorithms for solving partial differential equations; the design of energy landscapes and kinetic pathways for self-assembly; the design of fluids with designer rheologies; "optimal porous media"; learning the division rules for models of tissue development; efficient algorithms for finding unstable periodic orbits in turbulent flows as high-order descriptors of turbulent statistics; and the development of neural general circulation models for weather and climate, in which the physics parameterizations of the GCM are learned by fitting against data. If time permits, I will also touch on how these new computational tools suggest entirely new ways of finding approximate theoretical descriptions of solutions of nonlinear PDEs, and will point to a linear approximation of the Navier-Stokes equations that captures unsteady flow past a moving body up to Reynolds numbers of O(800).
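To make the central idea concrete — that automatic differentiation lets one differentiate through a time rollout of a dynamical system, just as one differentiates through the layers of a neural network — here is a minimal illustrative sketch in JAX. The system (a damped harmonic oscillator integrated by forward Euler), the loss, and all parameter values are invented for illustration; they are not taken from the talk.

```python
import jax
import jax.numpy as jnp

def step(state, damping, dt=0.1):
    # One forward-Euler step of a damped harmonic oscillator.
    x, v = state
    return jnp.array([x + dt * v, v - dt * (x + damping * v)])

def rollout(damping, n_steps=50):
    # Roll the dynamics forward in time; structurally this is a deep
    # "network" whose layers are time steps sharing one parameter.
    state = jnp.array([1.0, 0.0])
    for _ in range(n_steps):
        state = step(state, damping)
    return state

def loss(damping):
    # Residual energy at the end of the rollout.
    x, v = rollout(damping)
    return x**2 + v**2

# Reverse-mode AD through the entire rollout, for free.
grad_loss = jax.grad(loss)

# Tune the damping coefficient by plain gradient descent.
damping = 0.1
for _ in range(100):
    damping = damping - 0.2 * grad_loss(damping)
```

The same pattern — differentiate through a simulator, then optimize — scales to the PDE solvers, self-assembly landscapes, and GCM parameterizations mentioned above, with the simple Euler step replaced by the relevant physics.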