A New Era of Data-Enabled Computational Science

The 2017 MICDE Symposium

Agenda

All times are noted in Eastern Daylight Time (GMT -4).

8:00 a.m. — Registration Opens

Light breakfast items and coffee

8:30 a.m. — Welcome, Eric Michielssen, Associate Vice President, Advanced Research Computing
Eric Michielssen is the Associate Vice President for Advanced Research Computing, the Louise Ganiard Johnson Professor of Engineering, and Professor of Electrical Engineering and Computer Science, U-M College of Engineering.
8:45 a.m. — MICDE: A Defining Year, Krishna Garikipati
Krishna Garikipati is Director of the Michigan Institute for Computational Discovery and Engineering, and Professor of Mechanical Engineering and Mathematics.
9:00 a.m. — Frederica Darema, Director, Air Force Office of Scientific Research

InfoSymbiotic Systems/DDDAS and additive manufacturing for life-cycle support of complex systems

In recent years there have been transformative changes in the systems we deal with: they are more complex, often systems-of-systems, be they engineered or natural multi-entity systems, and they increasingly include human and societal systems. Such systems need methods not only for understanding, analysis, and design, but also for optimized, autonomic management and decision support across their operational cycle, their interoperability with other systems, and their evolution. A key underlying concept in DDDAS is the dynamic integration of instrumentation data and executing models of the system in a feedback control loop: on-line data are dynamically incorporated into the system's executing model to improve modeling accuracy or to speed up the simulation, and, in reverse, the executing model controls the instrumentation to selectively and adaptively target the data collection process and to dynamically manage collective sets of sensors and controllers.

DDDAS is timely and in line with the advent of Large-Scale Dynamic Data and Large-Scale Big Computing. Large-Scale Dynamic Data encompasses traditional Big Data together with the next wave of Big Data, namely the dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems through multitudes of heterogeneous sensors and controllers instrumenting those systems; the opportunities and challenges at these "large scales" relate not only to the size of the data but also to heterogeneity in the data, data collection modalities, data fidelities, and timescales, ranging from real-time to archival data. DDDAS entails the dynamic integration of traditional high-end/mid-range parallel and distributed computing with real-time data acquisition and control. Thus, in tandem with the important new dimension of dynamic data, DDDAS implies Large-Scale Big Computing, an extended view of Big Computing that includes a new dimension of computing: the collective computing performed by networked assemblies of multitudes of sensors and controllers. Furthermore, additive manufacturing enables ubiquitous embedded sensing and control. Together, these capabilities (DDDAS, additive manufacturing, Large-Scale Dynamic Data, and Large-Scale Big Computing) present the opportunity for new capabilities in autonomic life-cycle system support, including a change of paradigm in test and evaluation.
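As an illustration of the feedback loop described above, here is a minimal, hypothetical Python sketch of the DDDAS idea: streaming sensor readings are assimilated into an executing model, and the model in turn steers which sensor is sampled next. All names, dynamics, and noise levels are invented for illustration and are not part of the DDDAS program itself.

```python
import numpy as np

# Toy DDDAS-style feedback loop (all quantities invented):
# data -> model: readings are assimilated into the executing model;
# model -> instrumentation: the model targets the next sensor to sample.

rng = np.random.default_rng(0)

n_sensors = 5
true_field = rng.normal(size=n_sensors)   # "physical" quantities being sensed
estimate = np.zeros(n_sensors)            # executing model's state estimate
variance = np.ones(n_sensors)             # per-sensor estimate uncertainty
noise_var = 0.1                           # assumed sensor noise variance

for step in range(20):
    # Model -> instrumentation: target the sensor whose estimate is most uncertain.
    k = int(np.argmax(variance))
    reading = true_field[k] + rng.normal(scale=noise_var**0.5)

    # Instrumentation -> model: assimilate the reading (scalar Kalman update).
    gain = variance[k] / (variance[k] + noise_var)
    estimate[k] += gain * (reading - estimate[k])
    variance[k] *= (1.0 - gain)

print("final estimate error:", np.abs(estimate - true_field).max())
```

The adaptive choice of sensor in each iteration is the "reverse" arm of the loop: the executing model, not a fixed schedule, decides where to spend the next measurement.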

Bio: Dr. Frederica Darema is the SES Director of the Air Force Office of Scientific Research. Prior to that, she was the SES Director of the Mathematics, Information and Life Sciences Directorate at AFOSR, where she also spearheaded the Dynamic Data Driven Applications Systems (DDDAS) Program. Before AFOSR, she held executive-level positions at NSF as Senior Science and Technology Advisor and Senior Science Analyst in the Computer and Information Science and Engineering Directorate. Dr. Darema received her BS degree from the School of Physics and Mathematics of the University of Athens, Greece, and MS and Ph.D. degrees in Theoretical Nuclear Physics from the Illinois Institute of Technology and the University of California at Davis, respectively, which she attended as a Fulbright Scholar and a Distinguished Scholar. After Physics Research Associate positions at the University of Pittsburgh and Brookhaven National Lab, she received an APS Industrial Fellowship and became a Technical Staff Member in the Nuclear Sciences Department at Schlumberger-Doll Research. She then joined the IBM T. J. Watson Research Center as a Research Staff Member in the Computer Sciences Department, where she later established a multidisciplinary research group on parallel applications and became its Research Manager. While at IBM she also served in the IBM Corporate Technical Strategy Group, examining and helping to set corporate-wide strategies. Dr. Darema's interests and technical contributions span the development of parallel applications, parallel algorithms, programming models, environments, and performance methods and tools for the design of applications and software for parallel and distributed systems. Ideas she has promoted and scientific directions she has spearheaded are key to Internet of Things and Autonomic Systems capabilities. Over her career, Dr. Darema has developed initiatives and programs recognized as having "changed the landscape of Computer Science research," including the Next Generation Systems Program on novel research directions in systems software, and the DDDAS paradigm, which has been characterized as "visionary" and "revolutionary." She has also led initiatives on research at the interface of neurobiology and computing, as well as other NSF-wide and cross-agency initiatives and programs, such as those on Information Technology Research, Nanotechnology Science and Engineering, Scalable Enterprise Systems, and Sensors. During 1996-1998, she completed a two-year assignment at DARPA, where she initiated a new thrust for research on methods and technology for performance-engineered systems. Dr. Darema was elected IEEE Fellow for proposing the SPMD (Single-Program-Multiple-Data) computational model, which has become the predominant model for programming high-performance parallel and distributed computers. She is also the recipient of the IEEE Technical Achievement Award for her pioneering work on DDDAS, and has given numerous keynotes and other invited presentations in professional forums.

9:45 a.m. — Laura Balzano, U-M Department of Electrical Engineering and Computer Science

Learning low-rank models with missing data

Low-dimensional linear subspace approximations to high-dimensional data are powerful enough to capture a great deal of structure in many signals, yet they also offer simplicity and ease of analysis. Because of this, they have provided a powerful tool in many areas of engineering and science: problems of estimation, detection, and prediction, with applications such as network monitoring, collaborative filtering, object tracking in computer vision, and environmental sensing. We focus on this problem with two constraints: first, our data are streaming, and second, our data may be highly corrupted. Corrupt and missing data are the norm in many massive datasets, not only because of errors and failures in data collection, but because it may be impossible to collect and process all the desired measurements. In this talk, I will describe results and demonstrate algorithms for estimating subspace projections from streaming and incomplete data. The family of algorithms is based on GROUSE (Grassmannian Rank-One Update Subspace Estimation), a subspace tracking algorithm that performs incremental gradient descent on the Grassmannian (the manifold of all d-dimensional subspaces for a fixed d). We'll see the application to two problems in computer vision: real-time separation of background and foreground in video, and real-time structure from motion.
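For readers curious about the mechanics, here is a small Python sketch of a single GROUSE update in the spirit of the algorithm described above: a rank-one geodesic step on the Grassmannian computed from one partially observed vector. The step-size rule is simplified, and the toy experiment at the end is ours, not from the talk.

```python
import numpy as np

def grouse_step(U, v, omega, eta=1.0):
    """One GROUSE update of an orthonormal basis U (n x d) from a vector v
    observed only on the index set omega. Follows the rank-one geodesic
    update of Balzano et al.; the step-size handling here is simplified."""
    U_om = U[omega, :]
    # Least-squares fit of the observed entries to the current subspace.
    w, *_ = np.linalg.lstsq(U_om, v[omega], rcond=None)
    p = U @ w                              # predicted full vector
    r = np.zeros(U.shape[0])               # residual, zero off omega
    r[omega] = v[omega] - U_om @ w
    p_norm, r_norm, w_norm = map(np.linalg.norm, (p, r, w))
    if r_norm < 1e-12 or w_norm < 1e-12:
        return U                           # nothing to update
    t = eta * r_norm * p_norm              # geodesic step length
    return U + np.outer((np.cos(t) - 1.0) * p / p_norm
                        + np.sin(t) * r / r_norm, w / w_norm)

# Toy usage: track a 2-dimensional subspace of R^50 from 30%-sampled vectors.
rng = np.random.default_rng(1)
n, d = 50, 2
U_true, _ = np.linalg.qr(rng.normal(size=(n, d)))
U_hat, _ = np.linalg.qr(rng.normal(size=(n, d)))
for _ in range(500):
    v = U_true @ rng.normal(size=d)
    omega = rng.choice(n, size=15, replace=False)
    U_hat = grouse_step(U_hat, v, omega, eta=0.5)

v_new = U_true @ rng.normal(size=d)
print("residual of a new vector off the learned subspace:",
      np.linalg.norm(v_new - U_hat @ (U_hat.T @ v_new)))
```

Because the update moves along a geodesic of the Grassmannian, the basis stays orthonormal at every step, and each iteration touches only one partially observed vector, which is what makes the method streaming-friendly.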

Bio: Laura Balzano is an assistant professor in Electrical Engineering and Computer Science at the University of Michigan. She is an Intel Early Career Faculty Honor Fellow and received an NSF BRIGE award. She received all her degrees in Electrical Engineering: a BS from Rice University, an MS from the University of California, Los Angeles, and a PhD from the University of Wisconsin. She received the Outstanding MS Degree of the Year award from the UCLA EE Department and the Best Dissertation award from the University of Wisconsin ECE Department. Her main research focus is modeling with highly incomplete or corrupted data, and its applications in networks, environmental monitoring, and computer vision. Her expertise is in statistical signal processing, matrix factorization, and optimization.

10:15 a.m. — George Karniadakis, Professor of Applied Mathematics, Brown University

From solving PDEs to machine learning PDEs: An Odyssey in Computational Mathematics

In the last 30 years I have pursued the numerical solution of partial differential equations (PDEs) using spectral and spectral element methods for diverse applications, starting from deterministic PDEs in complex geometries, to stochastic PDEs for uncertainty quantification, and to fractional PDEs that describe non-local behavior in disordered media and viscoelastic materials. More recently, I have been working on solving PDEs in a fundamentally different way. I will present a new paradigm for solving linear and nonlinear PDEs from noisy measurements without the use of classical numerical discretization. Instead, we infer the solution of PDEs from noisy data, which can represent measurements of variable fidelity. The key idea is to encode the structure of the PDE into prior distributions and train Bayesian nonparametric regression models on available noisy data. The resulting posterior distributions can be used to predict the PDE solution with quantified uncertainty, efficiently identify extrema via Bayesian optimization, and acquire new data via active learning. Moreover, I will present how we can use this new framework to learn PDEs from noisy measurements of the solution and the forcing terms.
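To give a rough sense of the underlying machinery, below is a minimal Gaussian-process regression sketch in Python. The framework described in the abstract goes further, encoding the PDE operator into the prior covariance so that solution and forcing data are fused in one joint model; this sketch shows only the basic noisy-data posterior on which that construction rests. The kernel, data, and noise level are illustrative assumptions.

```python
import numpy as np

# Plain GP regression on noisy samples of an unknown function. In the
# PDE-learning framework, the same posterior algebra is applied after the
# differential operator has been pushed into the prior covariance.

def rbf_kernel(x1, x2, length=0.2):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

rng = np.random.default_rng(2)
x_train = rng.uniform(0.0, 1.0, size=25)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=25)  # noisy data
x_test = np.linspace(0.0, 1.0, 101)

noise = 0.1**2
K = rbf_kernel(x_train, x_train) + noise * np.eye(25)
K_star = rbf_kernel(x_test, x_train)

mean = K_star @ np.linalg.solve(K, y_train)               # posterior mean
cov = rbf_kernel(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))           # pointwise uncertainty

print("max |error| of posterior mean:",
      np.abs(mean - np.sin(2 * np.pi * x_test)).max())
print("max predictive std:", std.max())
```

The quantified uncertainty (the predictive standard deviation) is exactly what the abstract's Bayesian optimization and active learning steps exploit: new data are acquired where the posterior is least certain.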

Bio: George Karniadakis received his S.M. and Ph.D. from the Massachusetts Institute of Technology. He was appointed Lecturer in the Department of Mechanical Engineering at MIT in 1987 and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He then joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor in the Aeronautics Department at Caltech in 1993 and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. He became a full professor in 1996 and continues to serve as a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is a Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), the American Physical Society (APS, 2004-), and the American Society of Mechanical Engineers (ASME, 2003-), and an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the Ralph E. Kleinman Prize from SIAM (2015), the J. Tinsley Oden Medal (2013), and the Computational Fluid Dynamics Award (2007) from the U.S. Association for Computational Mechanics. His h-index is 79 and he has been cited over 32,500 times.
(Information from http://www.cfm.brown.edu/faculty/gk/)

11:00 a.m. — Break
11:15 a.m. — Karen Willcox, Professor of Aeronautics and Astronautics, Massachusetts Institute of Technology, Co-Director of the MIT Center for Computational Engineering

Data to decisions for the next generation of complex engineered systems

The next generation of complex engineered systems will be endowed with sensors and computing capabilities that enable new design concepts and new modes of decision-making. For example, new sensing capabilities on aircraft will be exploited to assimilate data on system state, make inferences about system health, and issue predictions on future vehicle behavior, with quantified uncertainties, to support critical operational decisions. However, data alone are not sufficient to support this kind of decision-making; our approaches must exploit the synergies of physics-based predictive modeling and dynamic data. This talk describes our recent work in adaptive and multifidelity methods for optimization under uncertainty of large-scale problems in engineering design. We combine traditional projection-based model reduction methods with machine learning methods to create data-driven adaptive reduced models. We develop multifidelity formulations to exploit a rich set of information sources, using cheap approximate models as much as possible while maintaining the quality of higher-fidelity estimates and associated guarantees of convergence.
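One common building block behind such multifidelity formulations is a control-variate estimator that spends most of its samples on a cheap low-fidelity model and corrects the result with a few high-fidelity evaluations. The Python sketch below illustrates that structure with invented toy models; it is a generic illustration, not Willcox's specific formulation.

```python
import numpy as np

# Control-variate multifidelity Monte Carlo: many cheap low-fidelity
# evaluations correct a small number of expensive high-fidelity ones.
# Both models below are toy stand-ins, not any real engineering code.

rng = np.random.default_rng(3)

def f_hi(z):                 # "expensive" high-fidelity model (toy)
    return np.sin(z) + 0.05 * z**2

def f_lo(z):                 # cheap, correlated low-fidelity surrogate (toy)
    return np.sin(z)

n_hi, n_lo = 50, 5000
z_hi = rng.normal(size=n_hi)     # small sample: both models evaluated
z_lo = rng.normal(size=n_lo)     # large sample: low-fidelity only

y_hi, y_lo_paired = f_hi(z_hi), f_lo(z_hi)

# Control-variate weight estimated from the paired sample.
C = np.cov(y_hi, y_lo_paired)
alpha = C[0, 1] / C[1, 1]

estimate = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo_paired.mean())
print("multifidelity estimate of E[f_hi]:", estimate)
print("high-fidelity-only estimate:     ", y_hi.mean())
```

The variance reduction comes entirely from the correlation between the two models: the low-fidelity term corrects the small-sample high-fidelity mean without requiring any additional expensive evaluations, which is the budget trade-off the abstract describes.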

Bio: Karen E. Willcox is Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. She is also Co-Director of the MIT Center for Computational Engineering and formerly the Associate Head of the MIT Department of Aeronautics and Astronautics. Willcox holds a Bachelor of Engineering degree from the University of Auckland, New Zealand, and master's and PhD degrees from MIT. Prior to MIT, she worked at Boeing Phantom Works with the Blended-Wing-Body aircraft design group. Her research at MIT has produced scalable methods for model reduction and new multi-fidelity formulations for design under uncertainty, which are widely applied in aircraft system design and environmental policy decision-making. Willcox is currently Co-Director of the Department of Energy DiaMonD Multifaceted Mathematics Capability Center on Mathematics at the Interfaces of Data, Models, and Decisions, and she leads an Air Force MURI on optimal design of multi-physics systems. She has co-authored more than 60 papers in peer-reviewed journals and advised more than 40 graduate students, including 16 PhD students. Willcox is an Associate Fellow of the AIAA and a member of SIAM, ASEE, and ASME, serving in multiple leadership positions within AIAA and SIAM. In addition to her research pursuits, Willcox is active in education innovation as co-Chair of the MIT Online Education Policy Initiative and Chair of the MIT OpenCourseWare Faculty Advisory Board. She is a recognized innovator in the U.S. education landscape and a 2015 recipient of a First in the World grant from the Department of Education.

(Information from http://kiwi.mit.edu)

12:00 p.m. — Lunch and Poster Session

Lunch time happenings

Students and post-docs will be available to talk to you about their posters from 12:30 – 2:00 p.m.

The projects from the 2017 NVIDIA Visualization Challenge will be on display.

MICDE is part of U-M Advanced Research Computing (ARC). Our three partner units, Advanced Research Computing – Technological Services (ARC-TS), Consulting for Statistics, Computing and Analytics Research (CSCAR), and the Michigan Institute for Data Science (MIDAS) will be on site during lunch to answer any questions you have about ARC resources, services and research initiatives.

The U-M 3D Lab will bring a 3D demo and answer questions about the resources they have available to researchers at U-M.

Please RSVP if you are planning on attending lunch.

2:00 p.m. — Jacqueline H. Chen, Distinguished Member of Technical Staff at the Combustion Research Facility, Sandia National Laboratories

Transforming Combustion Science and Technology Through Exascale Simulation

Exascale computing will enable combustion simulations in parameter regimes relevant to next-generation combustion devices burning alternative fuels. High-fidelity combustion simulations are needed to provide the underlying science base required to develop vastly more accurate predictive combustion models, used ultimately to design fuel-efficient, clean-burning vehicles, planes, and power plants for electricity generation. However, making the transition to exascale poses a number of algorithmic, software, and technological challenges due to power constraints and the massive concurrency expected at the exascale. Addressing issues of data movement, power consumption, memory capacity, interconnection bandwidth, programmability, and scaling through combustion co-design, as part of the DOE Exascale Computing Project, is critical to ensure that future combustion simulations can take advantage of emerging computer architectures in the 2023 timeframe. Co-design refers to a computer system design process in which combustion science requirements influence architecture design and architecture constraints inform the formulation and design of algorithms and software. The current state of petascale turbulent combustion simulation will be reviewed, followed by a discussion of programming models for heterogeneous, hierarchical machines with inherent variability. While bulk-synchronous programming and data parallelism have been operative at the petascale, the movement to exascale requires a shift towards asynchronous programming, in which, to extract maximum parallelism, both data parallelism and task parallelism accessing disjoint sets of fields are required. An example from a recent refactoring of a combustion direct numerical simulation (DNS) code, S3D, using an asynchronous model, Legion, with dynamic runtime analysis at scale on Titan at ORNL will be presented. Further, using Legion, the overall computational and data-intensive workflow is shown to be readily extensible. Results from in situ analytics for statistics and anomaly detection using the eigensolution of the reaction-rate Jacobian will be presented.
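The contrast between bulk-synchronous and asynchronous task execution can be sketched in a few lines. The toy Python example below uses thread-pool futures to mimic dependency-driven task launch; Legion itself is a C++ runtime with a far richer task and data model, so this is an analogy only, with invented kernel names.

```python
import concurrent.futures as cf

# Toy contrast: a global barrier between phases (bulk-synchronous) versus
# per-cell tasks that launch as soon as their own dependency is ready.

def chemistry(cell):        # stand-in for a per-cell reaction-rate kernel
    return sum(i * i for i in range(1000)) + cell

def transport(cell, chem):  # depends only on *its own* cell's chemistry result
    return chem + cell

cells = list(range(8))

with cf.ThreadPoolExecutor() as pool:
    # Bulk-synchronous: all chemistry finishes before any transport starts.
    chem_all = list(pool.map(chemistry, cells))
    sync_result = list(pool.map(transport, cells, chem_all))

    # Asynchronous: each transport task launches the moment the chemistry
    # task for the same cell completes -- no global barrier.
    chem_futs = {pool.submit(chemistry, c): c for c in cells}
    async_futs = []
    for fut in cf.as_completed(chem_futs):
        c = chem_futs[fut]
        async_futs.append((c, pool.submit(transport, c, fut.result())))
    async_result = [f.result() for _, f in sorted(async_futs)]

assert sync_result == async_result  # same answer, different scheduling freedom
```

In a real exascale setting the scheduling freedom matters because task durations vary across heterogeneous hardware; removing the barrier lets fast cells proceed instead of waiting on the slowest one.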

Bio: Jacqueline H. Chen is a Distinguished Member of Technical Staff at the Combustion Research Facility at Sandia National Laboratories. She has contributed broadly to research in petascale direct numerical simulation (DNS) of turbulent combustion, focusing on fundamental turbulence-chemistry interactions. These benchmark simulations provide fundamental insight into combustion processes and are used by the combustion modeling community to develop and validate turbulent combustion models for engineering CFD simulations. In collaboration with computer scientists and applied mathematicians, she is the founding Director of the Center for Exascale Simulation of Combustion in Turbulence (ExaCT). She leads an interdisciplinary team to co-design DNS algorithms, domain-specific programming environments, scientific data management and in situ uncertainty quantification and analytics, and architectural simulation and modeling with combustion proxy and production applications. She received the DOE INCITE Award in 2005, 2007, and 2008-2014, the Asian American Engineer of the Year Award in 2009, and the Sandia OE Adams Award in 2012. She is a member of the DOE Advanced Scientific Computing Research Advisory Committee (ASCAC) and its Subcommittees on Exascale Computing and on Synergies of Big Data and Exascale. She is an editor of Flow, Turbulence and Combustion, co-editor of the Proceedings of the Combustion Institute, volumes 29 and 30, and a member of the Board of Directors of the Combustion Institute.

(Information from http://crf.sandia.gov/combustion-research-facility/working-with-the-crf/crf-staff-2/jacqueline-chen)

2:45 p.m. — Emanuel Gull, U-M Department of Physics

Numerical Methods for the Many-Electron Problem

Electrons in solids follow the laws of quantum mechanics. In many systems their subtle quantum interplay causes unusual but exciting and technically relevant ‘strong correlation’ effects, including magnetism, superconductivity, or charge order. These effects are difficult to describe in an unbiased way using traditional methods of solid state theory, but advances in the field of numerical methods for the quantum many-body problem have opened new approaches. Large compute clusters and supercomputers play a crucial role in the solution of the resulting equations. We will introduce some of these methods and show how they are useful in describing electron correlation physics, outline computational and theoretical challenges and illustrate the need for theory and algorithm development.

Bio: Professor Gull works in the general area of computational condensed matter physics, with a focus on the study of correlated electronic systems in and out of equilibrium. He is an expert on Monte Carlo methods for quantum systems and one of the developers of the diagrammatic ‘continuous-time’ quantum Monte Carlo methods. Professor Gull received the DOE Early Career Award and the Nevill Mott SCES prize in 2013, and was named a Sloan Fellow in 2014. His recent work includes the study of the Hubbard model using large-cluster dynamical mean field methods, the development of vertex function methods for optical (Raman and optical conductivity) probes, and the development of bold-line diagrammatic algorithms for quantum impurities out of equilibrium. Professor Gull is involved in the development of open-source computer programs for strongly correlated systems.

3:15 p.m. — Break
3:30 p.m. — J. Tinsley Oden, Director of the Institute for Computational Engineering and Sciences, A.V.P. for Research, University of Texas at Austin

Scientific Predictions in the Presence of Uncertainties: Applications to the Prediction of Tumor Growth

The terms “predictive science,” “science-based predictions,” and “predictive computational science” have arisen in contemporary literature to describe the scientific disciplines concerned with the predictability of mathematical and computational models of events that occur in the physical universe, while addressing uncertainties in all factors that determine the reliability of the prediction. This lecture surveys the foundations of predictive computational science and presents applications to the prediction and treatment of cancer. It reviews Bayesian methods, maximum entropy methods, and information theory as a framework for quantifying uncertainties in observational data, model selection, model parameters, model validation, and predictions of quantities of interest. These ideas are pulled together in OPAL, the Occam Plausibility Algorithm, which is designed to provide a systematic approach to model selection, validation, and UQ.
The principal application of the methods discussed is the prediction of avascular tumor growth in living tissue. A general class of parametric models is derived using a combination of continuum mixture theory and well-established principles of cancer biology. Examples of computer predictions of tumor growth in murine laboratory animals with glioma are given. The effects of X-ray radiation on the decline of tumor mass are discussed. The OPAL algorithm is implemented as a means to select valid models of tumor growth from among large sets of parametric models. Monte Carlo methods are employed to solve the stochastic forward problem.
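To give a flavor of the plausibility computation at the heart of OPAL-style model selection, the sketch below estimates the Bayesian evidence p(data | model) for two invented candidate growth models by Monte Carlo over their priors, and ranks the models by posterior plausibility. The models, priors, data, and noise level are all hypothetical stand-ins, not the tumor-growth models of the talk.

```python
import numpy as np

# Toy Bayesian model plausibility: approximate the evidence
# p(data | model) = E_prior[ likelihood ] by Monte Carlo for each
# candidate model, then compare under equal prior model probabilities.

rng = np.random.default_rng(4)

t = np.linspace(0.0, 5.0, 12)
data = 0.5 * np.exp(0.4 * t) + 0.05 * rng.normal(size=t.size)  # synthetic "tumor volume"
sigma = 0.05                                                   # assumed noise level

def log_like(pred):
    return -0.5 * np.sum((data - pred)**2) / sigma**2

def evidence(model, prior_sampler, n=20000):
    """Monte Carlo estimate of p(data | model) over the parameter prior."""
    logs = np.array([log_like(model(t, *prior_sampler())) for _ in range(n)])
    m = logs.max()
    return np.exp(m) * np.mean(np.exp(logs - m))   # numerically stable average

def exp_model(t, a, b):      # candidate 1: exponential growth
    return a * np.exp(b * t)

def lin_model(t, a, b):      # candidate 2: linear growth
    return a + b * t

Z_exp = evidence(exp_model, lambda: rng.uniform(0.0, 1.0, size=2))
Z_lin = evidence(lin_model, lambda: rng.uniform(0.0, 1.0, size=2))

# With equal prior model probabilities, plausibility is proportional to evidence.
print("posterior plausibility of exponential model:", Z_exp / (Z_exp + Z_lin))
```

Because the evidence integrates the likelihood over the whole prior rather than maximizing it, an over-parameterized model that fits only in a sliver of its parameter space is automatically penalized, which is the Occam's-razor behavior the algorithm's name refers to.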

Bio: Tinsley Oden is the Associate Vice President for Research, the Director of the Institute for Computational Engineering and Sciences, the Cockrell Family Regents’ Chair in Engineering #2, the Peter O’Donnell, Jr. Centennial Chair in Computing Systems, a Professor of Aerospace Engineering and Engineering Mechanics, and a Professor of Mathematics at The University of Texas at Austin. He has published more than 700 technical articles and reports and authored or edited 50 books. Oden has been listed as an ISI Highly Cited Author in Engineering by the ISI Web of Knowledge, Thomson Scientific Company. Dr. Oden’s research focuses on contemporary topics in computational engineering and mathematics, including a posteriori error estimation, model adaptivity, multi-scale modeling, verification and validation of computer simulations, uncertainty quantification, and adaptive control. Dr. Oden is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and is the recipient of many national and international awards, including 10 medals, six honorary doctorates, the Chevalier de l’ordre des Palmes Académiques from the French government, and the 2013 Honda Prize for his role in establishing the field of computational mechanics. He has been on the Cockrell School of Engineering faculty since 1973.

4:30 p.m. — Announcement of Winning Posters, Winner of the SC2 Visualization Challenge, and Closing Remarks