
GANs and closures: Micro-macro consistency in multiscale modeling

Ellis R. Crabtree, Juan M. Bello-Rivas, Andrew L. Ferguson, and Ioannis G. Kevrekidis
Type: Publication · Date: September 30, 2023 · 2 min read · Status: Published

Multiscale Modeling & Simulation

Generative Learning · Generative Adversarial Networks · Manifold Learning · Dynamical Systems · Enhanced Sampling · Molecular Dynamics

Research Summary

Sampling the phase space of molecular systems (and, more generally, of complex systems effectively modeled by stochastic differential equations, SDEs) is a crucial modeling step in many fields, from protein folding to materials discovery. These problems are often multiscale in nature: they can be described in terms of low-dimensional effective free energy surfaces parametrized by a small number of "slow" reaction coordinates, while the remaining "fast" degrees of freedom populate an equilibrium measure conditioned on the reaction coordinate values. Sampling procedures for such problems are used to estimate effective free energy differences as well as ensemble averages with respect to the conditional equilibrium distributions; these latter averages lead to closures for effective reduced dynamic models. Over the years, enhanced sampling techniques coupled with molecular simulation have been developed; they often exploit knowledge of the system's order parameters to sample the corresponding conditional equilibrium distributions and to estimate ensemble averages of observables.

An intriguing analogy arises with the field of machine learning (ML), where generative adversarial networks (GANs) can produce high-dimensional samples from low-dimensional probability distributions. This sample generation is what equation-free multiscale modeling calls a "lifting" process: it returns plausible (or realistic) high-dimensional realizations of a model state from information about its low-dimensional representation.

In this work, we elaborate on this analogy and present an approach that couples physics-based simulations and biasing methods for sampling conditional distributions with ML-based conditional generative adversarial networks (cGANs) performing the same task. The "coarse descriptors" on which we condition the fine-scale realizations can either be known a priori or learned through nonlinear dimensionality reduction (here, using diffusion maps). We suggest that this may bring out the best features of both approaches: we demonstrate that a framework coupling cGANs with physics-based enhanced sampling techniques can improve the sampling of multiscale SDE dynamical systems, and shows promise even for systems of increasing complexity (here, simple molecules).
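To make the "lifting" idea concrete, here is a minimal conditional-GAN sketch in PyTorch. It is not the paper's implementation: the toy data, network sizes, and training schedule are all assumptions, chosen so that the conditional distribution being learned is known in closed form and the result can be checked.

```python
# Minimal cGAN "lifting" sketch (assumed illustration, not the authors' code):
# a generator conditioned on a 1-D slow coordinate c learns to sample the fast
# degree of freedom y from its conditional equilibrium measure. Toy fine-scale
# data: y | c ~ N(sin(c), 0.1^2), so the learned conditional can be verified.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
n, latent_dim, batch = 5000, 4, 128

c = 4.0 * torch.rand(n, 1) - 2.0            # slow reaction coordinate in [-2, 2]
y = torch.sin(c) + 0.1 * torch.randn(n, 1)  # fast DOF, conditionally Gaussian

# Generator: (noise z, condition c) -> fast degree(s) of freedom y
G = nn.Sequential(nn.Linear(latent_dim + 1, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
# Discriminator: (y, c) -> logit that the pair is a real fine-scale sample
D = nn.Sequential(nn.Linear(1 + 1, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(3000):
    idx = torch.randint(0, n, (batch,))
    c_b, y_b = c[idx], y[idx]
    z = torch.randn(batch, latent_dim)
    y_fake = G(torch.cat([z, c_b], dim=1))

    # Discriminator update: real (y, c) pairs vs. generated pairs, same conditions
    opt_D.zero_grad()
    loss_D = (bce(D(torch.cat([y_b, c_b], dim=1)), ones) +
              bce(D(torch.cat([y_fake.detach(), c_b], dim=1)), zeros))
    loss_D.backward()
    opt_D.step()

    # Generator update: fool the discriminator at the same conditions
    opt_G.zero_grad()
    loss_G = bce(D(torch.cat([y_fake, c_b], dim=1)), ones)
    loss_G.backward()
    opt_G.step()

# "Lifting": given a prescribed coarse value c*, generate plausible fast DOFs
c_query = torch.full((1000, 1), 0.5)
with torch.no_grad():
    y_lift = G(torch.cat([torch.randn(1000, latent_dim), c_query], dim=1))
print(f"lifted conditional mean: {y_lift.mean().item():.3f} "
      f"(target {math.sin(0.5):.3f})")
```

Conditioning both networks on c is what turns an ordinary GAN into a conditional one, and is the analogue of lifting a coarse state to consistent fine-scale realizations. When the coarse descriptors are not known a priori, the summary above proposes learning them with diffusion maps; a standard construction (again an assumed sketch following the usual Coifman-Lafon recipe, not the authors' code) looks like this:

```python
# Minimal diffusion-map sketch (assumed illustration) for learning coarse
# descriptors from high-dimensional sample data.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def diffusion_map(X, eps, n_coords=2, alpha=1.0):
    """Leading nontrivial diffusion-map coordinates of the point cloud X."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)  # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q) ** alpha                # density normalization
    d = K.sum(axis=1)
    M = K / np.sqrt(np.outer(d, d))                # symmetric conjugate of Markov matrix
    vals, vecs = eigh(M)                           # ascending eigenvalues
    vecs = vecs / np.sqrt(d)[:, None]              # back to right eigenvectors
    idx = np.arange(-2, -2 - n_coords, -1)         # skip the trivial eigenvalue 1
    return vecs[:, idx] * vals[idx]

# Example: recover the slow coordinate of a noisy helix embedded in 3-D
rng = np.random.default_rng(0)
t = np.linspace(0, 3 * np.pi, 400)
X = (np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)
     + 0.02 * rng.standard_normal((400, 3)))
psi = diffusion_map(X, eps=0.5)
# psi[:, 0] parametrizes progress along the helix (up to sign and scale)
```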