Material for the “Hands-on Introduction to Scientific LLMs” and “Building SciFMs & SciLLMs” tutorial sessions: https://github.com/ramanathanlab/SciFM24-Tutorial/tree/main

Scientific Foundation Models

MICDE Conference April 2nd & 3rd, 2024

Rackham Amphitheatre, 915 E Washington St, Ann Arbor, MI

The 2024 MICDE Conference marks a significant moment in the evolution of scientific inquiry and exploration. This conference focuses on scientific foundation models (SciFM), which aim to have a transformative impact on science comparable to that of Generative AI on natural language. SciFM are parameterized physical theories, usually trained on a broad range of scientific data, that can be applied to a range of downstream tasks, such as discovering patterns and generating scientific hypotheses, insights, and engineering designs.

This event is the first of its kind, dedicated exclusively to this exciting and nascent field. By convening the world’s foremost experts in the field, the conference aims to significantly broaden the horizons of scientific foundation models and Generative AI (including LLMs) for science.

Organizer:

Karthik Duraisamy, Director, MICDE

Program Committee:

Rada Mihalcea, Director, AI Lab

Venkat Viswanathan, Assoc. Professor, Aerospace Engineering

Vancho Kocevski, Managing Director, MICDE

MICDE Magazine

Special issue dedicated to the MICDE Conference on scientific foundation models (SciFMs).

Conference Recordings

Opening Remarks & Conference Goals, Karthik Duraisamy (UM)

Foundation Models for Enabling Rational Design of Biological Systems, Arvind Ramanathan (ANL)

Scaling up Materials Foundation Models, Venkat Viswanathan (UM)

Charting the course for SciFMs: Themes & Questions, Karthik Duraisamy (UM)

Foundational Methods for Foundation Models for Scientific Machine Learning, Michael W. Mahoney (UC Berkeley, ICSI, LBNL)

Scaling Laws of Formal Reasoning, Sean Welleck (CMU)

Next-generation Frameworks for AI and Science, and Trust, Payel Das (IBM)

Panel Discussion: Big Questions for SciFM

Trillion Parameter Foundation Models as Discovery Accelerators, Ian T. Foster (ANL, UChicago)

Neural Operators: AI Accelerating Scientific Understanding, Animashree Anandkumar (Caltech)

Fundamental Physics and Foundational Models for the Forecasting and Optimization of Complex Systems, Petros Koumoutsakos (Harvard)

University Laboratory Partnerships in the AI Era, Jason Pruet (LANL)

Panel Discussion: Funding and Venture Ecosystem

Poster Competition & Closing Remarks

Program

April 2

08:30 am : Opening Remarks & Conference Goals: Karthik Duraisamy

09:00 am : Applications of LLMs in Science:

09:00 am : Joint Language and Molecule Representation for Drug Discovery, Heng Ji (UIUC)

Talk title: Joint Language and Molecule Representation for Drug Discovery

Abstract: There are approximately 166 billion small molecules, of which 970 million are druglike. However, only 89 tyrosine kinase inhibitors are currently approved across healthcare systems worldwide, so the medicine domain urgently needs AI’s help. Existing large language models (LLMs) alone cannot solve the problem because LLMs, by design, make up (hallucinate) false claims in a confident tone. Existing knowledge bases cannot solve the problem either: none of the 89 kinase inhibitors appear in the popular human-constructed databases. The main reason is that chemistry language is very different from natural language: understanding it requires domain knowledge, multimodal information, and complex discourse structure. Using drug discovery as a case study, I will present our recent efforts to address these challenges, along with preliminary results on drug variants proposed by AI algorithms.

Bio: Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models, Knowledge-driven Generation, and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum and to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017, and was named one of the Women Leaders of Conversational AI (Class of 2023) by Project Voice. Her other awards include the “AI’s 10 to Watch” Award from IEEE Intelligent Systems in 2013, the NSF CAREER award in 2009, the PACLIC 2012 Best Paper Runner-up, the “Best of ICDM 2013” and “Best of SDM 2013” paper awards, an ACL 2018 Best Demo Paper nomination, the ACL 2020 and NAACL 2021 Best Demo Paper Awards, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity, Data Analytics, & IT Committee as an AI expert in 2023, was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030, and was invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She has coordinated the NIST TAC Knowledge Base Population task since 2010. She was an associate editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing and served as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023.

09:25 am : Foundation Models for Enabling Rational Design of Biological Systems, Arvind Ramanathan (ANL)

Talk title: Foundation Models for Enabling Rational Design of Biological Systems

Abstract:

Bio: Arvind Ramanathan is a computational biologist in the Data Science and Learning Division at Argonne National Laboratory and a senior scientist at the University of Chicago Consortium for Advanced Science and Engineering (CASE). His research interests are at the intersection of data science, high-performance computing, and biological/biomedical sciences.

His research focuses on scalable statistical inference techniques in three areas: (1) the analysis and development of adaptive multi-scale molecular simulations for studying complex biological phenomena (such as how intrinsically disordered proteins self-assemble, or how small molecules modulate disordered protein ensembles); (2) the integration of complex data for public health dynamics; and (3) guiding the design of CRISPR-Cas9 probes to modify microbial function(s).

He has published over 30 papers, and his work has been highlighted in the popular media, including NPR and NBC News. He obtained his Ph.D. in computational biology from Carnegie Mellon University and was the team lead for the integrative systems biology team within the Computational Science and Engineering Division at Oak Ridge National Laboratory.

09:50 am : Scaling up Materials Foundation Models, Venkat Viswanathan (UM)

Talk title: Scaling up Materials Foundation Models

Abstract: Text-based representations for molecules and materials have a rich history, tracing back to the seminal SMILES representation introduced in 1987. In this talk, we will discuss our work scaling materials foundation models for molecules and crystals, show neural scaling laws, and present results from training a 2.2B-parameter model on a single Cerebras CS-2 wafer. We will conclude by discussing the outlook for the future of generative materials modeling.
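To make the text-based representation concrete, here is a minimal illustration of a SMILES string in code; the use of RDKit and the aspirin example are assumptions for demonstration, not taken from the talk:

```python
# A minimal sketch of the SMILES text representation (illustrative only).
# Assumes RDKit is installed: pip install rdkit
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"    # aspirin, written as a SMILES string
mol = Chem.MolFromSmiles(smiles)    # parse the text into a molecule graph
print(mol.GetNumAtoms())            # 13 heavy atoms
print(Chem.MolToSmiles(mol))        # round-trip to canonical SMILES
```

Because SMILES turns molecules into plain text, the same tokenization and scaling machinery used for natural-language models can be applied to them, which is what makes representations like this a natural substrate for materials foundation models.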

Bio: Venkat Viswanathan is an Associate Professor of Aerospace and Mechanical Engineering at the University of Michigan. He is a recipient of numerous awards, including the MIT Technology Review Innovators Under 35, the Office of Naval Research (ONR) Young Investigator Award, the Alfred P. Sloan Research Fellowship in Chemistry, and the National Science Foundation CAREER award.

10:15 am : Break

10:30 am : Hands-on introduction to Scientific LLMs [TPC]

Tutorial material: https://github.com/ramanathanlab/SciFM24-Tutorial/tree/main

11:30 am : Lunch [Michigan League: Michigan and Kalamazoo Rooms]

01:00 pm : Building SciFMs & SciLLMs:

Track 1: LLM Tools for Science [TPC] at Rackham Amphitheatre
• Retrieval augmented generation
• Agent-based systems
• Building toolchains and surrogates
• Genome Scale language models
• Scaling experiments on Supercomputers

Track 2: LLM Tools for Mathematics [Sean Welleck] at East Conference Room
• Open Language for Mathematics
• Neural Theorem Proving

03:00 pm : Break

03:15 – 05:00 pm : Session 1: Office hours for Advanced users [TPC]; Session 2: Deploying GenAI models with NVIDIA Inference Microservices [NVIDIA]

April 3

08:00 am : Breakfast

08:30 am : Charting the course for SciFMs: Themes & Questions, K. Duraisamy (UM)

09:00 am : Emerging Paradigms in SciFM:

09:00 am : Foundational Methods for Foundation Models for Scientific Machine Learning, Michael W. Mahoney (UC Berkeley, ICSI, LBNL)

Talk title: Foundational Methods for Foundation Models for Scientific Machine Learning

Abstract: The remarkable successes of ChatGPT in natural language processing (NLP) and related developments in computer vision (CV) motivate the question of what foundation models would look like, and what new advances they would enable, when built on the rich, diverse, multimodal data available from large-scale experiments and simulations in scientific computing (SC), broadly defined. Such models could provide a robust and principled foundation for scientific machine learning (SciML), going well beyond simply using ML tools developed for internet and social media applications to solve scientific problems. I will describe recent work demonstrating the potential of the “pre-train and fine-tune” paradigm, widely used in CV and NLP, for SciML problems, which suggests a clear path toward building SciML foundation models. I will also describe recent work highlighting multiple “failure modes” that arise when interfacing data-driven ML methodologies with domain-driven SC methodologies, which marks clear obstacles along that path, as well as initial work on novel methods that address several of these challenges and their implementations at scale; general solutions to these problems will be needed to build robust and reliable SciML models with millions, billions, or trillions of parameters.
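As a rough, self-contained sketch of the “pre-train and fine-tune” paradigm mentioned above (synthetic model and data, not the speaker’s setup):

```python
# Illustrative "pre-train and fine-tune" pattern: freeze a (pretend) pre-trained
# backbone and train only a small task-specific head on downstream data.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 1)             # new head for a downstream SciML task

for p in backbone.parameters():     # freeze the pre-trained weights
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(128, 32), torch.randn(128, 1)   # synthetic downstream data

for _ in range(100):                # fine-tune only the head
    loss = nn.functional.mse_loss(head(backbone(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final fine-tuning loss: {loss.item():.4f}")
```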

Bio: Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar and a faculty scientist at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics-informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his Ph.D. from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he is on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council’s Committee on the Analysis of Massive Data, he co-organized the Simons Institute’s fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute’s 2016 PCMI Summer Session on The Mathematics of Data, and he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets. He is the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley.


09:40 am : Scaling Laws of Formal Reasoning, Sean Welleck (CMU)

Talk title: Scaling Laws of Formal Reasoning

Abstract: Improving Large Language Models’ (LLMs) formal reasoning abilities is critical for their use in scientific domains. We explore two related research directions. First, we introduce Llemma, a foundation model specifically designed for mathematics. Llemma leverages Proofpile II, a 52B-token mathematical corpus, to improve the relationship between training compute and reasoning ability (e.g., leading to a 13% accuracy improvement on MATH). Second, we discuss “easy-to-hard” generalization, where a model generalizes to problems harder than those present in the training data. Building on the premise that evaluation is often easier than generation, we explore training strong evaluator models to facilitate easy-to-hard generalization. Moving forward, our work highlights the value of scaling high-quality data collection and of further algorithmic development to improve formal reasoning in LLMs.
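For context, the neural scaling laws referenced here are conventionally expressed as power laws relating loss to training compute; the form below is the canonical one from the scaling-laws literature, shown for illustration rather than as a result from the talk:

```latex
% Canonical power-law scaling of loss L with training compute C:
% L_inf is the irreducible loss; a, b > 0 are empirically fitted constants.
L(C) = L_{\infty} + a\,C^{-b}
```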

Bio: Sean Welleck is an Assistant Professor at Carnegie Mellon University. He leads the Machine Learning, Language, and Logic (L3) Lab at CMU, with interests that include algorithms for large language models and AI for mathematics. His research includes an Outstanding Paper Award at NeurIPS 2021 and a Best Paper Award at NAACL 2022. Previously, he obtained B.S.E. and M.S.E. degrees at the University of Pennsylvania, a Ph.D. at New York University, and was a Postdoctoral Scholar at the University of Washington.

10:10 am : Next-generation Frameworks for AI and Science, and Trust, Payel Das (IBM)

Talk title: Next-generation Frameworks for AI and Science, and Trust

Abstract: The convergence of generative AI and large-scale AI modeling technologies associated with science appears to lower technical and knowledge barriers and increase the number of actors with certain capabilities. These capabilities have potential for beneficial uses while at the same time raising certain biosafety and biosecurity concerns. In this talk, I will present our recent work on advancing foundational model architectures, their emerging properties such as memorization and generalization, as well as possible control mechanisms for ensuring safety and trust. I will also discuss demonstrations of such technologies on scientific applications.

Bio: Payel Das is a principal research staff member and a manager at IBM Research Artificial Intelligence (AI), IBM Thomas J. Watson Research Center, Yorktown Heights, NY. She leads trustworthy generative AI and next-generation AI architecture research at IBM. She currently serves as an advisory board member in the Applied Mathematics and Statistics Department at Stony Brook University and is an IBM Master Inventor. Das received a Ph.D. from Rice University. She has co-authored over 80 peer-reviewed publications and 40+ patent disclosures and given dozens of invited talks. Her research interests include statistical physics, trustworthy machine learning, neuro- and physics-inspired AI, and machine creativity. Das is the recipient of several IBM Outstanding Technical Achievement Awards (the highest technical award at IBM), an IBM Special Division Award, two IBM Research Division Awards, an IBM Eminence and Excellence Award, five IBM Invention Achievement Awards, and several US and EU government funding awards. Her work has also been recognized by the Harvard Belfer Center’s Technology and Public Purpose (TAPP) Project. She is a Senior Member of IEEE.

10:40 am: Coffee Break

11:00 am : Panel Discussion: Big Questions for SciFM
  • Ian T. Foster, Argonne National Laboratory, University of Chicago
  • Alfred Hero, National Science Foundation
  • Jonathan Carter, Lawrence Berkeley National Laboratory
  • Petros Koumoutsakos, Harvard University
  • Payel Das, IBM T. J. Watson Research Center
  • Heng Ji, University of Illinois at Urbana Champaign
  • Michael W. Mahoney, UC Berkeley, Lawrence Berkeley National Laboratory, ICSI

12:00 pm : Lunch, and Contributed posters

01:30 pm : Frontiers of SciFM:

01:30 pm : Trillion Parameter Foundation Models as Discovery Accelerators, Ian T. Foster (ANL, UChicago)

Talk title: Trillion Parameter Foundation Models as Discovery Accelerators

Abstract: I propose, and present some evidence in support of, the following hypotheses: Accelerated discovery will require advanced AI methods for such purposes as harnessing and navigating vast knowledge, automating hypothesis generation, and operating ever-more complex tools. Foundation models are a promising path forward, but we need models that know more about science than those available to us today. The enormous cost of building FMs means that we can only build a small number for science; thus, we need extensive collaboration. Making effective use of the resulting FMs will require rethinking our research infrastructure in order for human scientists, AI, HPC, and robotics to work together effectively.

Bio: Ian Foster is a Senior Scientist and Distinguished Fellow, and director of the Data Science and Learning Division, at Argonne National Laboratory, and the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago. He has a BSc degree from the University of Canterbury, New Zealand, and a PhD from Imperial College, United Kingdom, both in computer science. His research is in distributed, parallel, and data-intensive computing technologies, and their applications to scientific problems. He is a fellow of the AAAS, ACM, BCS, and IEEE, and has received the BCS Lovelace Medal; the IEEE Babbage, Goode, and Kanai awards; and the ACM/IEEE Ken Kennedy Award.

02:05 pm : Neural Operators: AI Accelerating Scientific Understanding, Animashree Anandkumar (Caltech)

Talk title: Neural Operators: AI Accelerating Scientific Understanding

Abstract: While language models have impressive text-understanding capabilities, they lack the physical understanding and grounding needed in scientific domains. For instance, language models can suggest new hypotheses, such as new molecules or designs, but these lack physical validity, and the models cannot simulate the underlying processes internally. Hence, proposed hypotheses still require physical experimentation for validation, which is the biggest bottleneck of scientific research. Numerical simulations offer an alternative to physical experiments, but traditional methods are too slow and infeasible for the complex processes observed in many scientific domains. We propose AI-based simulation methods that are 4-5 orders of magnitude faster and cheaper than traditional simulations. They are based on neural operators, which learn mappings between function spaces and have been successfully applied to weather forecasting, fluid dynamics, carbon capture and storage modeling, and optimized design of medical devices, yielding significant speedups and improvements.
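To make the neural-operator idea concrete, below is a minimal sketch of a spectral convolution layer of the kind used in Fourier neural operators, one member of the neural-operator family; it is an illustration under simplifying assumptions, not the speaker’s implementation:

```python
# Minimal 1D spectral convolution (the core of a Fourier neural operator):
# transform to Fourier space, linearly mix channels on the lowest `modes`
# frequencies with learned complex weights, and transform back. Because the
# layer acts on Fourier modes, it maps functions to functions and can be
# evaluated on any grid resolution.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid_points)
        x_ft = torch.fft.rfft(x)                       # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        m = min(self.modes, x_ft.shape[-1])
        out_ft[:, :, :m] = torch.einsum(               # per-mode channel mixing
            "bim,iom->bom", x_ft[:, :, :m], self.weight[:, :, :m]
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space

layer = SpectralConv1d(channels=4, modes=16)
u = torch.randn(2, 4, 128)          # a batch of functions on 128 grid points
print(layer(u).shape)               # torch.Size([2, 4, 128])
```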

Bio: Animashree (Anima) Anandkumar is a Bren Professor of Computing and Mathematical Sciences at the California Institute of Technology. Her research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms for machine learning. Tensor decomposition methods are embarrassingly parallel and scalable to enormous datasets. They are guaranteed to converge to the global optimum and yield consistent estimates for many probabilistic models, such as topic models, community models, and hidden Markov models. More generally, Professor Anandkumar has been investigating efficient techniques to speed up non-convex optimization, such as escaping saddle points efficiently.


02:40 pm : Fundamental Physics and Foundational Models for the Forecasting and Optimization of Complex Systems, Petros Koumoutsakos (Harvard)

Talk title: Fundamental Physics and Foundational Models for the Forecasting and Optimization of Complex Systems

Abstract: 

Bio: Petros Koumoutsakos is the Herbert S. Winokur, Jr. Professor of Engineering and Applied Sciences, Faculty Director of the Institute for Applied Computational Science (IACS), and Area Chair of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He studied Naval Architecture (Diploma, NTU of Athens; M.Eng., U. of Michigan) and Aeronautics and Applied Mathematics (PhD, Caltech). He conducted post-doctoral studies at the Center for Parallel Computing at Caltech and at the Center for Turbulence Research at Stanford University and NASA Ames. He served as the Chair of Computational Science at ETH Zurich (1997-2020), has held visiting fellow positions at Caltech, the University of Tokyo, MIT, and the Radcliffe Institute for Advanced Study at Harvard University, and is a Distinguished Affiliated Professor at TU Munich.

Petros is an elected Fellow of the American Society of Mechanical Engineers (ASME), the American Physical Society (APS), the Society of Industrial and Applied Mathematics (SIAM), and the Collegium Helveticum. He is the recipient of the European Research Council’s Advanced Investigator Award and the ACM Gordon Bell Prize in Supercomputing. He is also an elected International Member of the US National Academy of Engineering (NAE).

His research interests are on the fundamentals and applications of computing and artificial intelligence to understand, predict, and optimize fluid flows in engineering, nanotechnology, and medicine.

03:15 pm : Coffee Break

03:30 pm : University Laboratory Partnerships in the AI Era, Jason Pruet (LANL)

Talk title: University Laboratory Partnerships in the AI Era

Abstract: 

Bio: After completing his graduate studies in physics at UCSD in 2001, Jason spent a decade working at LLNL on national security challenges. He then took a position with the Department of Energy. There he served in NNSA as Director of the Office of Engineering, Stockpile Assessments, and Responsiveness, in the Office of Intelligence and Counterintelligence as Chief of the Nuclear Devices branch, and in other roles. Since 2018 Jason has been at the Los Alamos National Laboratory. He is presently the Director of the laboratory’s National Security AI Office.

04:00 pm : Panel Discussion: Funding and Venture Ecosystem
  • Jean-Luc Cambier, Office of the Secretary of Defense
  • Jason Pruet, Los Alamos National Laboratory
  • Alvaro Velasquez, DARPA Information Innovation Office
  • Jonathan Carter, Lawrence Berkeley National Laboratory
  • Rajesh Swaminathan, Khosla Ventures
  • Alfred Hero, National Science Foundation
  • John Wei, Applied Ventures

05:00 pm : Award Ceremony and Closing Remarks

05:30 pm : Adjourn

Invited Speakers & Panelists

Jason Pruet, Director of National Security AI, Los Alamos National Laboratory

Ian T. Foster, Director of Data Science and Learning Division, Argonne National Laboratory

Animashree Anandkumar, Bren Professor of Computing and Mathematical Sciences, California Institute of Technology

John Wei, Ph.D., MBA, Investment Director, Applied Ventures, LLC

Heng Ji, Professor of Computer Science, University of Illinois at Urbana Champaign

Petros Koumoutsakos, Herbert S. Winokur, Jr. Professor of Computing in Science and Engineering, Harvard University

Venkat Viswanathan, Associate Professor of Aerospace Engineering, University of Michigan


Sean Welleck, Assistant Professor of Computer Science at Carnegie Mellon University

Alvaro Velasquez, Program Manager, DARPA Information Innovation Office (I2O)

Arvind Ramanathan, Computational Biologist in the Data Science and Learning Division, Argonne National Laboratory & Senior Scientist, University of Chicago Consortium for Advanced Science and Engineering

Michael Mahoney, Professor of Statistics, UC Berkeley & Leader of the Machine Learning and Analytics Group, Lawrence Berkeley National Laboratory & Vice President and Director of Big Data Group, International Computer Science Institute

Payel Das, Research Staff Member and Manager, AI Science, IBM T. J. Watson Research Center

Jonathan Carter, Associate Laboratory Director, Computing Sciences, Lawrence Berkeley National Laboratory

Rajesh Swaminathan, Partner, Khosla Ventures


Jean-Luc Cambier, Director of Research Programs, Office of the Secretary of Defense


Alfred Hero, Program Director, Computing and Communication Foundations, National Science Foundation

Panel Discussions

    Big Questions for SciFM:

      • I. T. Foster (ANL, UChicago)
      • A. Hero (NSF)
      • J. Carter (LBNL)
      • P. Koumoutsakos (Harvard)
      • P. Das (IBM)
      • H. Ji (UIUC)
      • M. W. Mahoney (UC Berkeley, LBNL, ICSI)

     

    Funding and Venture Ecosystem:

    • J-L Cambier (OUSD)
    • J. Pruet (LANL)
    • A. Velasquez (DARPA)
    • J. Carter (LBNL)
    • R. Swaminathan (Khosla)
    • A. Hero (NSF)
    • J. Wei (Applied Ventures)

    Tutorials and Hackathon

      The tutorials are designed to introduce attendees to Generative AI & Large Language Models (LLMs) for science and mathematics. They are co-organized with the Trillion Parameter Consortium (TPC), Prof. Sean Welleck (CMU), and NVIDIA.

      The tutorials and hackathon will be divided into three sessions:

      • Hands-on Introduction to Scientific LLMs [TPC]: Tutorial on the basics of LLMs and Generative AI with a focus on science and engineering.
      • Building SciFMs & SciLLMs: focusing on advanced SciFM and LLM concepts, run in two tracks:
      Track 1: LLM Tools for Science [TPC] at Rackham Amphitheatre
      • Retrieval augmented generation
      • Agent-based systems
      • Building toolchains and surrogates
      • Genome Scale language models
      • Scaling experiments on Supercomputers

      Details: 

      • Brian Hsu and Priyanka Setty will discuss the use of agents and agent-based systems for experimental protocol generation. The talk will center on developing a workflow that addresses the question: given a general description of a protocol, is it possible to automatically generate code to execute it on a robotics system of interest? This will be grounded in a simple growth-curve modeling or pipetting example problem in the lab, aimed at designing novel peptides targeting antibiotic-resistant bacteria.
      • Carla Mann will illustrate an example of building on top of ESM-like models for protein-protein interactions.
      • Archit Vasan will discuss developing surrogate models for virtual screening protocols and present a simple transformer-based approach.
      • Alex Brace, Kyle Hippe, and Azton Wells will discuss Genome Scale language models and their downstream applications.
      • Carlo Siebenschuh will present non-transformer approaches for scientific data, with applications such as neural operators and latent diffusion models.
      • Ozan Gokdemir will illustrate retrieval augmented generation in the context of scientific literature (a minimal sketch follows this list).
      • Azton Wells and Archit Vasan will provide an overview of scaling and of what it means to set up scaling experiments on supercomputing platforms.
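      As a companion to the retrieval augmented generation item above, here is a minimal sketch of the RAG pattern. The `embed` and `generate` functions are placeholders, assumptions standing in for a real sentence-embedding model and LLM call; they are not the tutorial’s actual code.

```python
# Minimal retrieval augmented generation (RAG) sketch: embed a corpus, retrieve
# the documents most similar to a question, and condition generation on them.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call a sentence-embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return "[answer grounded in the retrieved context]\n" + prompt[:120]

corpus = [
    "Neural operators learn mappings between function spaces.",
    "SMILES strings encode molecules as text.",
    "Llemma is a language model for mathematics.",
]
index = np.stack([embed(doc) for doc in corpus])   # (num_docs, dim)

def rag_answer(question: str, k: int = 2) -> str:
    scores = index @ embed(question)       # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]     # indices of the k best documents
    context = "\n".join(corpus[i] for i in top)
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(rag_answer("How are molecules represented as text?"))
```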
      Track 2: LLM Tools for Mathematics [Sean Welleck] at East Conference Room
      • Open Language for Mathematics
      • Neural Theorem Proving

      Details:

      Interactive proof assistants enable the verification of mathematics and software using specialized programming languages. The emerging area of neural theorem proving integrates neural language models with interactive proof assistants. Doing so is mutually beneficial: proof assistants provide correctness guarantees on language model outputs, while language models help make proof assistants easier to use. This talk overviews two research threads in neural theorem proving, motivated by applications to mathematics and software verification. First, we discuss language models that predict the next step of a proof, centered around recent work that integrates language models into an interactive proof development tool. Second, we discuss techniques that leverage informal mathematical data for formal theorem proving. The talk accompanies a set of interactive notebooks available on GitHub.
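      As a taste of what the notebooks cover, here is a tiny Lean 4 example of the style of statement and proof involved in neural theorem proving; a language model would propose the proof step, and the proof assistant checks it (illustrative only; the tutorial’s actual exercises may differ):

```lean
-- A language model predicts the next proof step; Lean verifies it.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```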

      The open hackathon and tutorial are divided into two sessions:

      Session 1: Office hours for Advanced users [TPC]
      Session 2: Deploying GenAI models with NVIDIA Inference Microservices [NVIDIA]

      Tutorial facilitators:

      [TPC]: Arvind Ramanathan, Staff Scientist at Argonne National Laboratory, Sean Welleck, Assistant Professor of Computer Science at Carnegie Mellon University, and the following students: Ozan Gokdemir, Priyanka Setty, Archit Vasan, Kyle Hippe, Carla M. Mann, and Azton Wells.

      [NVIDIA]: Geetika Gupta, principal product manager at NVIDIA, and Yuliana Zamora

      Poster Competition

        Students and postdocs are invited to submit a poster for the 2024 MICDE Annual Conference poster competition. For U-M students and postdocs, this can be on any topic related to computational science. For non-U-M students and postdocs, the poster should specifically relate to the theme of the conference.

         Winning Posters:

        • 1st place: MIST: Molecular Insight SMILES Transformer, by Anoushka Bhutani, Alexius Wadell, Shang Zhu, and Prof. Venkat Viswanathan
        • 2nd place (general poster category): An adaptive surrogate-based Multi-fidelity Monte Carlo scheme for reliability analysis of nonlinear systems against natural hazards, by Liuyun Xu and Prof. Seymour Spence
        • 2nd place (SciFM poster category): A Reliable Knowledge Processing Framework for Combustion Science using Foundation Models, by Vansh Sharma and Prof. Venkat Raman

        The poster session is sponsored by Donaldson.