AFOSR/AFRL Center of Excellence on Rocket Combustor Dynamics (PI: Karthik Duraisamy)

This project seeks to advance the state-of-the-art in data-driven Reduced Order Models (ROMs) and enable efficient prediction of transient events leading to the onset of instabilities in liquid-fueled rocket combustion engines. Outcomes include the development of a simulation suite composed of ROMs of variable/adaptive fidelity derived from an organized hierarchy of simulations. Innovations in ROMs include novel ways of inferring model structure, closure modeling, and the use of machine learning to accelerate execution. This project includes collaborators at MIT and Purdue.

Funded by AFOSR/AFRL
Term: 2017-2023

Computational framework for data-driven, predictive, multi-scale and multi-physics modeling of battery materials (PI: K. Garikipati, co-PIs: V. Gavini, K. Duraisamy)

This project seeks to advance a more complete and quantitative understanding of the physics of battery materials. It will enable battery design for performance (specific capacity, specific power density, charge/discharge rates, etc.), thermal management, prevention of mechanical degradation, and closed-loop control, guided by computational modeling. This project will combine new ideas from machine learning with the development of advanced computational methods to enhance the ability of the computational materials physicist to predict battery properties with quantitative accuracy. The machine learning programs will link the simulation codes at the various scales, using large data sets from lower-scale calculations to inform the models at each scale.

Funded by Toyota Research Institute
Term: 2017-2021

A Diagnostic Modeling Methodology for Dual Retrospective Cost Adaptive Control of Complex Systems (PI: Dennis Bernstein; co-PIs: M. Gamba, K. Duraisamy)

This project will extend data-driven adaptive control to systems that are beyond the capability of traditional adaptive control algorithms due to the extreme complexity of the physics. This will be accomplished by developing, demonstrating, and validating a novel diagnostic modeling methodology that is based on limited sensor data to uncover the essential dynamics of the system. This project combines multidisciplinary expertise in adaptive control and system identification; computational fluid dynamics and data-driven modeling; and combustion dynamics and physics-guided diagnostics. The venue for developing, demonstrating, and validating the proposed diagnostic modeling methodology is experimental control of instability in lean premixed combustion.

Funded by NSF
Term: 2017-2023

SAFARI – Secure Automation for Advanced Reactor Innovation (UM PI: A. Manera)

This center-scale project develops AI-enhanced digital twins of nuclear reactors. The PI of the project is Prof. Annalisa Manera (UM Nuclear Engineering) and the project includes partners at Argonne National Lab, Idaho National Lab, and the engineering firms Kairos Power and Curtiss-Wright. The team will validate and demonstrate their approach using an experimental flow loop. The loop runs molten salt as the coolant, emulating cooling loops inside an advanced reactor type called a molten salt reactor. Then, the software will be used to optimize the design of the Kairos Power fluoride-salt-cooled, high-temperature reactor.

Funded by ARPA-E
Term: 2021-2024

Scalable Environment for Quantification of Uncertainty and Optimization in Industrial Applications (U-M PI: K. Duraisamy)

This project develops an integrated plan for performing uncertainty quantification (UQ) and design under uncertainty (DUU) that aggressively pursues new frontiers in scale and complexity. In particular, this project will create advances in scalable forward and inverse UQ algorithms and the rigorous quantification of model inadequacy using data-driven approaches. This project provides a foundation for the development of generalized stochastic design approaches that address the robustness and reliability of complex multi-disciplinary systems. This is a collaborative effort with Sandia National Laboratories, Stanford University and Colorado School of Mines.
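As a minimal illustration of the forward-UQ building block, the sketch below propagates input uncertainty through a toy scalar model by Monte Carlo sampling. The model and the input distributions are invented for illustration; they are not from the project.

```python
import numpy as np

# Forward-UQ sketch: propagate uncertain inputs through a model by
# Monte Carlo sampling and report output statistics. The model
# y = sin(x1) + 0.5*x2^2 and both input distributions are illustrative.
rng = np.random.default_rng(0)

def model(x1, x2):
    return np.sin(x1) + 0.5 * x2**2

n = 100_000
x1 = rng.normal(0.0, 0.1, n)   # uncertain input, N(0, 0.1^2)
x2 = rng.normal(1.0, 0.05, n)  # uncertain input, N(1, 0.05^2)
y = model(x1, x2)

mean, std = y.mean(), y.std()  # output mean and standard deviation
```

Scalable algorithms of the kind pursued in this project replace brute-force sampling with multilevel, adaptive, or surrogate-accelerated variants, but the estimator structure is the same.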

Funded by DARPA
Term: 2015-2018

ConFlux: A Novel Platform for Data-Driven Computational Physics (PI: K. Duraisamy, co-PIs: K. Garikipati, B. Mozafari, A. Figueroa, G. Evrard)

This project develops a hardware/software ecosystem called ConFlux, specifically designed to enable High Performance Computing (HPC) clusters to communicate seamlessly and at interactive speeds with data-intensive operations. The project establishes a hardware and software ecosystem to enable large-scale data-driven modeling of multiscale physical systems. ConFlux will produce advances in predictive modeling in several disciplines including turbulent flows, materials physics, cosmology, climate science and cardiovascular flow modeling. These applications require HPC applications (running on external clusters) to interact with large data sets at run time. ConFlux provides low-latency communications for in- and out-of-core data, cross-platform storage, as well as high-throughput interconnects and massive memory allocations. The file system and scheduler natively handle extreme-scale machine learning and traditional HPC modules in a tightly integrated workflow rather than in segregated operations, leading to significantly lower latencies, fewer algorithmic barriers and less data movement.

Funded by NSF
Term: 2015-2019

Formalisms and Tools for Data-driven Turbulence Modeling (PI: K. Duraisamy)

The goal of this research is to devise rigorous mathematical techniques that utilize experimental and simulation data to develop predictive models of turbulent flow. An important aspect of this approach is that the data is processed in the context in which it is needed for prediction. Domain-specific machine learning techniques are used to convert information to modeling knowledge. In essence, the inverse solution infers functional deficiencies in the model and machine learning is used to reconstruct the missing functional form. Objectives include investigation of how to identify and formulate a properly-posed data-driven turbulence modeling problem, the implications that these approaches have in more general data-driven computational physics applications, and the most effective ways to use machine learning in a predictive physics setting. Applications to be explored include transition to turbulence, thermal transport, and near-wall turbulent stress closures. The proposed work is expected to result in improved closure models for Reynolds-Averaged as well as hybrid Reynolds-Averaged/Large Eddy simulations.
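The invert-then-learn loop described above can be sketched on a synthetic one-dimensional problem. The baseline model, the hidden "true" deficiency, and the feature set below are all invented for illustration; in practice the inversion is a PDE-constrained optimization and the regressor maps local flow features to the corrective field.

```python
import numpy as np

# Field-inversion / machine-learning sketch on a toy problem: a baseline
# prediction u_base(x) disagrees with data u_data(x); infer a corrective
# field beta(x) so that beta * u_base matches the data, then learn beta
# from features. All quantities here are synthetic.
x = np.linspace(0.0, 1.0, 200)
u_base = 1.0 + x                               # baseline model output
beta_true = 1.0 + 0.3 * np.sin(2 * np.pi * x)  # hidden model deficiency
u_data = beta_true * u_base                    # "measurements"

# Step 1 (inversion): infer beta pointwise by minimizing
# (beta * u_base - u_data)^2; closed-form here.
beta_inferred = u_data / u_base

# Step 2 (learning): regress the inferred field on features of x,
# standing in for the local flow features used in practice.
features = np.column_stack([np.ones_like(x),
                            np.sin(2 * np.pi * x),
                            np.cos(2 * np.pi * x)])
coef, *_ = np.linalg.lstsq(features, beta_inferred, rcond=None)
beta_model = features @ coef

err = np.max(np.abs(beta_model - beta_true))  # reconstruction error
```

The toy problem keeps both steps in closed form; the structure (infer a field, then generalize it with a regressor) is what carries over to turbulence closures.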

Funded by NSF
Term: 2015-2019

Framework for Turbulence Modeling using Big Data (PI: K. Duraisamy)

This project develops a framework to utilize large-scale data for predictive modeling. It involves the development of domain-specific learning techniques suited for the representation of turbulence and its modeling, the establishment of a trusted ensemble of data for the creation and validation of new models, and the deployment of these models in complex aerospace problems. This is a collaborative effort between the University of Michigan, Stanford University, Iowa State and Pivotal Inc., consulting with Boeing Commercial Airplanes and interacting with NASA Langley Research Center.

Funded by NASA
Term: 2014-2017

Integrated computational framework for designing dynamically controlled alloy-oxide heterostructures (PIs: E. Marquis & K. Garikipati)

This project will develop an openly distributable framework that rigorously integrates theory, experiment and computation to predict and elucidate the evolution of complex materials heterostructures. A central challenge is linking the electronic structure of the constituent chemistries of a complex materials system to its behavior at technologically relevant length and time scales.

Funded by NSF
Term: 2014-2017

Developing a Theory of Spatially Evolving Turbulence for Cardiovascular Flows (PIs: A. Figueroa, E. Johnsen, D. Dowling)

Turbulence is present in pathologic cardiovascular conditions such as aortic coarctations, aneurysms, and arterio-venous fistulas. As direct numerical simulation of such flows is prohibitively expensive, there is a pressing need to develop turbulence models that take into account the complex spatially (and temporally) evolving nature of blood flow.

We have recently developed a mixed laminar-turbulent model that has the potential to be extended to spatially evolving turbulent structures such as those seen in cardiovascular flows. If successful, this model would eventually enable the computation of complex blood flows in a clinically relevant timeframe on current hardware.

Funded by UM
Term: 2016-2017

Mechanisms Underlying the Progression of Large Artery Stiffness in Hypertension (PI: A. Figueroa)

Central artery stiffening is a well-established initiator and indicator of cardiovascular disease; it arises in hypertension, aging, diabetes, obesity, connective tissue disorders such as Marfan syndrome, organ transplantation, and the treatment of AIDS patients. Such stiffening contributes to heart disease and end-stage kidney failure. This project will use four diverse mouse models of hypertension and computer models to identify improved methods of diagnosing and treating central arterial stiffening.

Funded by NIH
Term: 2016-2020

Advancing Predictive Strategies for Wall-Bounded Turbulence by Fundamental Studies and Data-driven Modeling (UM PI: K. Duraisamy)

This project is in collaboration with Prof. Durbin (Iowa State). We are looking to improve our physical understanding (via DNS and LES) and modeling of non-equilibrium wall-bounded turbulent flows of interest to the Navy. Both physics-based and data-enabled models are being explored.

Funded by ONR
Term: 2017-2021

Computational Analysis of Renal Flow Characteristics Following Treatment of Abdominal Aortic Coarctation (PI: A. Figueroa)

Aortic coarctation is an abnormal narrowing of the aorta that increases the resistance in the aorta and forces the heart to pump at elevated pressures to maintain sufficient perfusion of the vital organs. Aortic coarctations are the most common cause of hypertension in children. Multiple surgical options exist to alleviate the pressure gradient across the coarctation. However, postoperative hypertension is still commonly observed. The aim of this project is to study the hemodynamic outcomes of different surgical repair scenarios using patient-specific computational modeling. In the future, these models could aid preoperative surgical planning, improving postoperative outcomes.

Funded by NIH
Term: 2017-2018

Data-driven approaches for multiphase turbulence modeling (PI: Jesse Capecelatro)

This project seeks to advance the state-of-the-art in developing turbulence closures for multiphase flows. Traditional approaches rely on extensions from single-phase models, making them inadequate at characterizing the complex physics and capturing the span of regimes present in collisional fluid-particle flows. A combination of inverse modeling and sparse regression techniques is being developed to enable large-scale, highly-resolved datasets to be distilled into tractable, algebraic model closures. This data-driven approach is designed to provide predictions of turbulent particle-laden flows that remain accurate across vastly different flow regimes.
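The distill-into-algebraic-closure step can be sketched with sequentially thresholded least squares on synthetic data. The candidate library, the feature names, and the "true" closure below are invented for illustration and do not represent the project's actual model forms.

```python
import numpy as np

# Sparse-regression sketch: distill synthetic "high-resolution" data into
# an algebraic closure. The true closure f = 0.9*phi + 2.0*phi*re and the
# candidate library are illustrative.
rng = np.random.default_rng(2)

phi = rng.uniform(0.0, 0.5, 500)   # e.g., particle volume fraction (synthetic)
re = rng.uniform(0.0, 2.0, 500)    # a Reynolds-number-like feature (synthetic)
f = 0.9 * phi + 2.0 * phi * re     # "measured" closure term

# Candidate library of algebraic terms.
library = np.column_stack([phi, re, phi * re, phi**2, re**2])
names = ["phi", "re", "phi*re", "phi^2", "re^2"]

# Sequentially thresholded least squares: fit, zero out small
# coefficients, refit on the surviving terms.
coef, *_ = np.linalg.lstsq(library, f, rcond=None)
for _ in range(5):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    active = ~small
    if active.any():
        coef[active], *_ = np.linalg.lstsq(library[:, active], f, rcond=None)

# Surviving terms form the discovered algebraic closure.
model = {n: c for n, c in zip(names, coef) if c != 0.0}
```

The thresholding is what yields a tractable closed-form closure rather than a black-box fit: only terms that genuinely explain the data survive.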

Funded by NSF GRFP
Term: 2017-2020

Deep Learning and Reduced Order Modeling for Automotive Aerodynamics (PI: K. Duraisamy)

We are developing and demonstrating new machine learning and reduced order modeling tools for vehicle aerodynamics. Models are trained on simulation data generated by tools used by GM. The goal is to develop a framework that aerodynamic engineers and body designers can use as a platform to study the impact of shape modifications on the aerodynamic forces and the flow field in near-real time. Specifically, convolutional neural networks are used to augment (and under the right circumstances, to replace) detailed CFD solutions of aerodynamic flows.

Funded by General Motors
Term: 2018-2021

Physics Inspired Learning and Learning the Order and Structure Of Physics (UM PI: A. Gorodetsky, Co-PI: K. Duraisamy)

We propose to develop novel machine learning and artificial intelligence algorithms that are capable of learning and enforcing physics principles and constraints. Using modern sparse and low-multilinear-rank regression architectures and neural networks, a number of critical tasks can be enacted from data alone: (i) the discovery of first-principles models, (ii) the identification of physical constraints and conservation laws, and (iii) improved models using known physics and enforcing known constraints. These architectures allow us to develop black-box and gray-box modeling strategies for complex systems where physics is unknown or only partially known. This project is in collaboration with the Univ of Washington.

Funded by DARPA
Term: 2018-2020

Scale-resolving turbulence simulations through adaptive high-order discretizations & data-enabled model refinements (PI: K. Fidkowski, Co-PI: K. Duraisamy)

The proposed research seeks to improve the effectiveness of Hybrid RANS-LES (HRLES) methods through concurrent advances in numerical methods, error estimation/adaptivity, and data-driven modeling. The specific project objectives are:
1. Formulation of consistent high-order discontinuous Galerkin methods to solve the governing equations of HRLES.
2. Derivation of error estimators to distinguish between modeling and numerical errors in under-resolved LES regions, and development of h-p adaptive techniques to control numerical errors.
3. Formulation and solution of inverse problems to fundamentally characterize the source of modeling discrepancies in hybrid RANS-LES methods.
4. Use of data-driven techniques to improve length-scale correlations in RANS regions and blending functions in interface regions.
5. Demonstration of the accuracy and robustness of the developed techniques on three-dimensional benchmark problems involving turbulent separation and heat transfer.

Funded by NASA
Term: 2018-2021

Data-Driven Modeling for Turbulence Transition in Mixing (PI: K. Duraisamy)

We are developing and applying data-driven methods to unsteady RANS simulations of turbulent mixing. Specifically, we intend to apply the field inversion/machine learning process to discover a correction function for the k-L-a model in the simulation of one-dimensional Rayleigh-Taylor mixing layers.

Funded by Lawrence Livermore National Laboratory
Term: 2018-2019

Artificial Intelligence guided multi-scale multi-physics framework for discovering complex emergent materials phenomena (PIs: X. Huan, K. Garikipati)

We propose a Bayesian framework to develop new machine learning and operator inference methods to aid the discovery of physical phenomena and the prediction of material properties and responses. We specifically target the challenges in material physics associated with systematic attempts to (a) abstract complexity from a hierarchy of scales into predictive model forms, and (b) delineate mechanisms of coupled materials physics. Our proposed project develops the following tasks that unite AI with the discovery of emergent physics: (1) scale bridging from quantum mechanics to continuum PDEs; (2) physics discovery via system identification and operator inference; (3) Bayesian inference and uncertainty quantification for learning from data and quantifying predictive quality; (4) optimal experimental design for intelligent data acquisition and management to achieve efficient high-level learning.

Funded by DARPA
Term: 2019-2021

Data-driven fuel cell modeling for real time engine control (PI: J. Siegel, Co-PI: K. Duraisamy)

The goal of this research is to develop a data-driven fuel cell modeling approach optimized for real-time operation (<= 20 ms), aimed at direct market engine control unit implementation. Toyota expects that the initial model calibration would be based on high-fidelity 3D CFD models for operating conditions identified by Design of Experiments (DoE) methods. We propose to use Field Inversion and Machine Learning (FIML) to improve the accuracy of existing 1D PEM fuel cell models. Once hardware is available, focused sets of real-world data can also be used to refine the model. Finally, once the algorithm is implemented within a production system, on-board model training can occur to account for manufacturing tolerances and component degradation over time.

Funded by Toyota Motors North America
Term: 2019-2021

Machine Learning for Cooling Pack Design Optimization for Electrified Vehicles (PI: K. Duraisamy)

Machine learning methods will be applied to leverage the massive amount of CFD data available from existing front air inlet / cooling pack designs to create the needed ROMs. The primary approach is to develop a formal ROM by (a) using linear and nonlinear machine learning methods to identify a basis that captures the necessary dynamics on a low-dimensional space, and (b) developing ROMs by either projecting the Navier-Stokes equations onto this lower-dimensional space or by directly learning the ROM evolution on this low-dimensional manifold. The output will be a set of ODEs that can be easily incorporated into a system-level model for design.
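The projection route can be sketched with proper orthogonal decomposition (POD) on a toy linear full-order model. The 100-state diffusion-like operator below stands in for CFD snapshot data and is purely illustrative.

```python
import numpy as np

# POD/Galerkin ROM sketch: extract a low-dimensional basis from snapshot
# data via SVD, then project a known linear full-order operator onto it.
# The 1-D diffusion-stencil operator L is illustrative, not a CFD model.
rng = np.random.default_rng(4)
n = 100

# Full-order model: dx/dt = L x (tridiagonal diffusion stencil).
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# Collect snapshots by explicit Euler time stepping from a random start.
dt, steps = 0.01, 200
x = rng.standard_normal(n)
snaps = []
for _ in range(steps):
    x = x + dt * (L @ x)
    snaps.append(x.copy())
S = np.column_stack(snaps)                    # snapshot matrix, n x steps

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 10
Phi = U[:, :r]

# Galerkin projection: reduced operator L_r = Phi^T L Phi gives an
# r-dimensional ODE system da/dt = L_r a, cheap enough for system-level use.
L_r = Phi.T @ L @ Phi
```

The reduced system inherits stability from the full operator here (L is symmetric negative definite, so L_r is too); nonlinear terms and learned evolution maps require additional treatment.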

Funded by Ford Motors
Term: 2020-2022

MULTI-source LEarning-Accelerated Design of high-Efficiency multi-stage compRessor (MULTI-LEADER) (UM PI: K. Duraisamy)

The goal of this project is to accelerate and augment the multi-disciplinary detailed design of a more energy-efficient multi-stage compressor, via machine learning, with considerations of aerodynamics, structures and additive manufacturability. Current industrial practices for the design of multi-stage compressors involve simulation-based design optimization with successive levels of model fidelity, iteratively evaluated between distinct disciplines, one stage at a time, to tackle the high-dimensional design variations. This proposal addresses these key design challenges: (1) concurrent optimization of multiple stages under many non-linear constraints; (2) the multitude of evaluations of high-fidelity, expensive solvers and their gradients during optimization convergence in high-dimensional design; (3) multi-disciplinary design to maximize aerodynamic performance while guaranteeing structural integrity and additive manufacturability; (4) utilization of multiple fidelities of solvers with disparate parameterization and modeling assumptions. This is in collaboration with UTRC, Univ of Maryland and Univ of Pennsylvania.

Funded by ARPA-E
Term: 2020-2022

Uncertainty Quantification of Microstructural Material Variability Effects (UM PI: K. Garikipati)

This project, in collaboration with Sandia National Laboratories, will develop data-driven models of continuum plasticity. The big data in this study will come from a very large number of crystal plasticity computations, and experiments, that take account of microstructural variability. A machine learning tier will connect these models with macroscale continuum plasticity code, and will incorporate bounds on uncertainty. This algorithmic framework will be developed on ConFlux.

Funded by Sandia National Labs
Term: 2016-2019