Venue: 2636 GGBA
Bio: Dr. Lianghao Cao is a Postdoctoral Scholar Research Associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He obtained a B.S. in Engineering Mechanics from the University of Illinois at Urbana-Champaign and a Ph.D. in Computational Science, Engineering, and Mathematics from The University of Texas at Austin. His research blends mechanistic modeling, uncertainty quantification, and scientific machine learning to understand, enhance, and control the quality, validity, and reliability of simulation-based predictions of complex physical systems.
Abstract: This talk focuses on a derivative-informed supervised learning method for efficiently building machine learning surrogates of high-fidelity computational models, particularly those governed by parametric partial differential equations. Unlike conventional supervised learning, which treats the model as a black box, our approach leverages additional model sensitivity information, extracted by solving forward or adjoint sensitivity equations. This sensitivity information is integrated into the surrogate's architecture and training process based on rigorous error analysis. We refer to such a surrogate construction as DINO (derivative-informed neural operator).
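To make the idea concrete, here is a minimal, purely illustrative sketch of a derivative-informed training objective. It is not the talk's actual DINO implementation: the "model" is a toy scalar map u(m) = sin(m) standing in for a PDE-governed parameter-to-observable map, the surrogate is a cubic polynomial rather than a neural operator, and the names (`dino_loss`, the weight parameter) are hypothetical. The point is only the structure of the loss: an output misfit plus a weighted Jacobian (sensitivity) misfit.

```python
import numpy as np

# Toy stand-in for a high-fidelity model: outputs u(m) = sin(m) and
# sensitivities (Jacobians) du/dm = cos(m), sampled at training parameters.
rng = np.random.default_rng(0)
m_train = rng.uniform(-1.0, 1.0, size=32)
u_train = np.sin(m_train)   # model outputs
J_train = np.cos(m_train)   # model sensitivities

def dino_loss(theta, m, u, J, weight=1.0):
    """Illustrative derivative-informed loss for a cubic surrogate
    u_theta(m) = theta @ [1, m, m^2, m^3]: mean-squared output misfit
    plus a weighted mean-squared Jacobian misfit."""
    feats = np.stack([np.ones_like(m), m, m**2, m**3])            # basis
    dfeats = np.stack([np.zeros_like(m), np.ones_like(m), 2 * m, 3 * m**2])
    pred, dpred = theta @ feats, theta @ dfeats
    return np.mean((pred - u) ** 2) + weight * np.mean((dpred - J) ** 2)

# For this linear-in-parameters surrogate, the joint objective is a
# least-squares problem: stack output rows and Jacobian rows and solve.
A = np.vstack([
    np.stack([np.ones_like(m_train), m_train, m_train**2, m_train**3], axis=1),
    np.stack([np.zeros_like(m_train), np.ones_like(m_train),
              2 * m_train, 3 * m_train**2], axis=1),
])
b = np.concatenate([u_train, J_train])
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In the actual method the surrogate is a neural operator over function spaces and the misfits are measured in norms chosen via error analysis; the closed-form least-squares solve here is just a convenience of the linear toy.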
DINO offers two key advantages over conventional surrogate construction. First, it significantly improves the cost-accuracy trade-off for a wide range of models, often by one to two orders of magnitude. Second, it directly controls the surrogate Jacobian (Fréchet derivative) errors, thus enhancing performance in surrogate-driven outer-loop problems that use gradient- and Hessian-based optimization algorithms. We demonstrate DINO's capability to accelerate infinite-dimensional Bayesian inversion. First, we show that geometric MCMC driven by DINO achieves a 2–9x speedup in asymptotically exact posterior sampling. Second, we introduce LazyDINO, a DINO-driven measure transport method for amortized Bayesian inversion, which is one to two orders of magnitude more cost-efficient than competing methods.
This talk is based on joint work with Michael Brennan, Joshua Chen, Omar Ghattas, Youssef Marzouk, and Thomas O’Leary-Roseberry.