The crucial role of massively parallel simulations in future space exploration missions

By | HPC, News, Research

The NASA Mars 2020 Mission was launched with the goal of seeking signs of ancient life and collecting samples of rock and regolith (broken rock and soil) for possible return to Earth. Perseverance, the mission’s rover, is testing technologies to help pave the way for future human exploration of Mars. While Perseverance was launched in the summer of 2020 and landed on the Martian surface on February 18, 2021, the journey started years earlier, when the mission’s objectives were outlined, including realistic surface operations, a proof-of-concept instrument suite, and suggestions for threshold science measurements that would meet the proposed objectives. The success of this mission, like that of past and future missions, is the collective result of thousands of NASA-funded projects carried out over many decades by teams of researchers and scientists from all over the country. University of Michigan Professor Jesse Capecelatro (Mechanical Engineering & Aerospace Engineering) leads one of these projects. In 2016, his research group started working on a project aimed at developing high-fidelity models of plume-induced soil erosion during lunar and planetary landings that will be used in future missions.

During descent, exhaust plumes fluidize surface soil and dust, forming craters and buffeting the lander with coarse, abrasive particles. The SkyCrane technology, used by the Curiosity rover in 2012 and by Perseverance in 2021, was designed to mitigate plume-surface interactions by keeping the jets well above the surface. Even so, a wind sensor on Curiosity was damaged during its landing, and Perseverance’s video footage of its own landing shows significant erosion and high-speed ejecta. The SkyCrane approach is also not a practical option for future crewed and sample-return missions.

NASA is aiming to improve rover landing systems for future missions. Computational models and simulations are a critical component of this effort, since it is not feasible to reproduce Martian (or other celestial bodies’) entry conditions and run thousands of landing tests in a lab on Earth. This is where the work of Prof. Capecelatro’s research group, including doctoral candidates Greg Shallcross and Meet Patel and postdoctoral fellow Mehdi Khalloufi, comes in: accurate prediction of plume-surface interactions is necessary for the overall success of future space missions. While simulations of plume-surface interactions have been conducted in the past, they are outdated and typically relied on simplified assumptions that prevent a detailed, dynamic analysis of the fluid-particle coupling. Capecelatro’s research code will provide NASA with a framework to better predict how different rover designs would affect the landing, specifically the forces imparted on the planet’s surface during touchdown and the ability to adjust the rover’s landing trajectory independently of the NASA mission control team on Earth.

Prof. Capecelatro’s research project utilizes predictive simulation tools to capture the complex multiphase dynamics associated with rocket exhaust impingement during touchdown. Even on the most powerful supercomputers, a direct solution approach can only account for about a thousand particles at a time, so accurate and predictive multi-scale models of the unresolved flow physics are essential.

Full landing site image credit: NASA/JPL-Caltech (mars.nasa.gov/resources/24762/mars-sample-return-lander-touchdown-artists-concept/); particle and intermediate scale images: Capecelatro’s Research Group

Particle Scale

The group has been developing the simulation capabilities to directly resolve the flow at the sub-particle scale to shed light on important physics under the extreme conditions relevant to particle-structure interactions. Their model uses a massively parallel compressible particle-laden flow simulation tool in which the exhaust plume and its corresponding flow features are computed in an Eulerian-Lagrangian framework. At this scale, for example, the flow between individual particles is resolved, providing important insight into drag and turbulence under these extreme conditions.
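
To make the role of such drag closures concrete, here is a minimal Python sketch that evaluates the classical Schiller-Naumann correlation for an isolated sphere. This is textbook material and stands in for, rather than reproduces, the group’s models for hot, high-speed plume conditions; all property values below are hypothetical.

```python
# Illustrative only: the standard Schiller-Naumann drag correlation for an
# isolated sphere, the kind of closure that particle-resolved simulations
# are used to calibrate or extend. It is not the group's model for dense,
# high-speed plume conditions, and all values below are hypothetical.
import numpy as np

def particle_reynolds(rho_f, slip_speed, d_p, mu_f):
    """Particle Reynolds number based on the fluid-particle slip velocity."""
    return rho_f * slip_speed * d_p / mu_f

def drag_coefficient(re_p):
    """Schiller-Naumann drag coefficient, a common fit up to Re_p ~ 1000."""
    re_p = np.maximum(re_p, 1e-12)  # guard against division by zero
    return np.where(re_p < 1000.0,
                    24.0 / re_p * (1.0 + 0.15 * re_p**0.687),
                    0.44)

# Example: a 100-micron grain slipping at 300 m/s through a thin, hot gas.
re_p = particle_reynolds(rho_f=0.02, slip_speed=300.0, d_p=100e-6, mu_f=2e-5)
cd = float(drag_coefficient(re_p))
print(f"Re_p = {re_p:.1f}, C_D = {cd:.3f}")
```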

Intermediate Scale

As a next step, the particle-scale results inform models used in the intermediate-scale simulations developed by the group, where particles are still tracked individually but the flow is not resolved at sub-particle resolution, allowing them to simulate upwards of 1 billion particles. At this scale, an Eulerian-Lagrangian framework is used to couple the ground’s particle flow with the jet’s plume.
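
The point-particle idea at this scale can be pictured with a rough sketch like the one below: particles are advanced individually with a simple drag law while the gas is represented only on a coarse Eulerian grid, and the particle volume is deposited back onto that grid. The grid, time step, and property values are hypothetical, and the sketch is not taken from the group’s solver.

```python
# A minimal point-particle (Euler-Lagrange) sketch: particles are tracked
# individually while the gas lives on a coarse 1D grid, so the flow between
# particles is modeled rather than resolved. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_p, n_cells, dx = 100_000, 64, 0.01            # particles, grid cells, cell size [m]
cell_volume = dx * 1.0                          # 1D column with unit cross-section [m^3]
x_p = rng.uniform(0.0, n_cells * dx, n_p)       # particle positions
u_p = np.zeros(n_p)                             # particle velocities
u_f = np.full(n_cells, 50.0)                    # gas velocity in each cell (plume-like)
d_p, rho_p, mu_f, dt = 100e-6, 2500.0, 2e-5, 1e-5
tau_p = rho_p * d_p**2 / (18.0 * mu_f)          # Stokes response time

for _ in range(10):
    cell = np.clip((x_p / dx).astype(int), 0, n_cells - 1)
    u_p += dt * (u_f[cell] - u_p) / tau_p       # drag toward the local gas velocity
    x_p = (x_p + dt * u_p) % (n_cells * dx)     # advect with periodic wrap-around

# Deposit particle volume onto the grid to obtain a volume-fraction field,
# the quantity the unresolved-flow models act on.
vol_p = np.pi / 6.0 * d_p**3
alpha = np.zeros(n_cells)
np.add.at(alpha, cell, vol_p / cell_volume)
print(alpha.mean(), alpha.max())
```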

Full Landing Site Scale

While the intermediate-scale simulations make it possible to study erosion and cratering, a full landing site containing trillions of particles is still out of reach even on the most powerful HPC clusters. After further modeling, Capecelatro’s multi-scale framework will be handed over to NASA, where it will be incorporated into simulations of the full landing site. At this scale, NASA’s framework uses an Eulerian two-fluid model that treats both fluid and particles as continua, informed by the particle- and intermediate-scale models.

Mission Mars 2020 is expanding NASA’s robotic presence on the red planet. While it is a big step toward setting the stage for future human exploration, the landing technology demonstrated by the Perseverance rover will need further redesign to make the voyage safe for humans. Capecelatro’s physics-based models are aiding this task by helping to predict more accurately the outcome of a spacecraft attempting to land safely millions of miles from home. As in many other fields, computational science will continue to play a critical role in the future of humanity’s quest to conquer space. #computationalscience everywhere!

Related links:
Sticking the landing on Mars: High-powered computing aims to reduce guesswork
Capecelatro’s Research Group
NASA 2020 Mars Mission: Perseverance Rover

Summer STEM Institute (SSI) Teaching Opportunities

By | News, SC2 jobs

The Summer STEM Institute (SSI) is a virtual education program that teaches programming, data science, and research.

 

SSI is currently hiring for both part-time and full-time roles for summer 2021. Both roles offer competitive compensation.

Role 1: Part-Time Research Mentor (10-15 hours/week):

Responsibilities: Lead a virtual lab of 2-3 students; mentor students through the ideation and completion of their own computational or theoretical research projects; support students through the creation of weekly research deliverables, including a background research report, a research proposal, and a final paper and presentation

Qualifications: Passion for teaching and mentorship; graduate student, postdoctoral fellow, or (in exceptional circumstances) undergraduate with extensive programming or research experience in a computational or theoretical field; past research experiences and deliverables, including published papers and presentations

Note: Research mentors are able to work for the program alongside full-time job, internship, or research commitments.

Role 2: Full-Time Teaching Fellow (40 hours/week):

Responsibilities: Teach and work closely with students to support them through the data science and research bootcamp; answer student questions and discussion board posts; host office hours; leave feedback on student homework assignments

Qualifications: Passion for teaching and mentorship; experience with Python programming and data science libraries (numpy, pandas, matplotlib, sklearn); experience with data science and the research process


Interested undergraduate and graduate students are encouraged to apply. Please fill out this 2-minute interest form. If we decide to move forward with your application, we will send more information about the roles and also times to schedule an interview. If you have any questions, please reach out to hiring@summersteminstitute.org.

Argonne Training Program on Extreme-Scale Computing

By | News, SC2 jobs

The annual Argonne Training Program on Extreme-Scale Computing (ATPESC) is set to take place August 1–13, 2021. The call for applications is now open through March 1, 2021.

 

Apply now for an opportunity to learn the tools and techniques needed to carry out scientific computing research on the world’s most powerful supercomputers. ATPESC participants will be granted access to DOE’s leadership-class systems at the ALCF, OLCF, and NERSC.

The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future.

The core of the program will focus on programming methodologies that are effective across a variety of supercomputers and that are expected to be applicable to exascale systems. Additional topics to be covered include computer architectures, mathematical models and numerical algorithms, approaches to building community codes for HPC systems, and methodologies and tools relevant for Big Data applications.

PROGRAM CURRICULUM

Renowned scientists and leading HPC experts will serve as lecturers and guide the hands-on sessions. The core curriculum will cover:

  • Hardware architectures
  • Programming models and languages
  • Data-intensive computing and I/O
  • Visualization and data analysis
  • Numerical algorithms and software for extreme-scale science
  • Performance tools and debuggers
  • Software productivity
  • Machine learning and deep learning tools and methods for science

ELIGIBILITY AND APPLICATION

Doctoral students, postdocs, and computational scientists interested in attending ATPESC can review eligibility and application details on the website.

There are no fees to participate in ATPESC. Domestic airfare, meals, and lodging are also provided. Application deadline: March 1, 2021.

The event will be held in the Chicago area. If an in-person meeting is not possible, it will be held as a virtual event.

IMPORTANT DATES – ATPESC 2021

  • March 1, 2021 – Deadline to submit applications
  • April 26, 2021 – Notification of acceptance
  • May 3, 2021 – Account application deadline

For more information, contact support@extremecomputingtraining.anl.gov

Los Alamos National Laboratory, Multiple HPC Intern Summer Opportunities

By | News, SC2 jobs

For questions about internships and instructions on how to apply, please email HPCRecruits@lanl.gov


HPC Data Movement and Storage Team: Upcoming Student Project Opportunities

PROJECT: EMERGING STORAGE SYSTEM(S) EVALUATION

(Lead Mentor: Dominic Manno)

Storage systems are evolving as technologies such as flash become economically viable. Vendors implementing cutting-edge hardware solutions often approach LANL to help gain insight into how these systems could move into the real world (HPC applications). Work in this area includes potential modifications to filesystems, filesystem configuration/tuning, testing hardware, fixing bugs, and finding bottlenecks anywhere in the stack in order to increase efficiency and make the storage system faster.

Preferred skills:

● Interest in HPC and storage systems
● Comfortable with computer hardware
● Strong analytical skills
● Benchmarking experience
● Experience with Linux and scripting (bash, csh, Python, etc.)
● Comfortable with C programming

PROJECT: FILE SYSTEM(S) FEATURE AND TOOLSET EVALUATIONS

(Lead Mentor: Dominic Manno)

File systems evolve along with user requirements. New features are implemented to accommodate changing workloads and technology. LANL’s storage team must evaluate new features and their impact on HPC applications. This work will explore file system features, modifications to current build procedures/processes, and the impact on LANL’s storage team’s metric-collection tooling. Work in this area includes building source code (kernel included), configuring Linux servers, configuring a basic distributed file system, benchmarking, experiment design, analysis of data, and scripting.

Preferred skills:

● Knowledge of and interest in filesystems
● Experience with Linux and Command Line Interface
● Experience with code build systems and software
● Interest in HPC and storage systems at scale
● Benchmarking experience

ABOUT THE HPC DATA MOVEMENT AND STORAGE TEAM:
The High Performance Computing (HPC) Data Storage Team provides vanguard production support, research, and development for existing and future systems that feed and unleash the power of the supercomputer. The Data Storage Team designs, builds and maintains some of the largest, fastest and most complex data movement and storage systems in the world, including systems supporting 100 Petabytes of capacity. We provide storage systems spanning the full range of tiers from the most resilient archival systems to the pinnacle of high-speed storage, including all-flash file systems and systems supplying bandwidth that exceeds a terabyte per second to some of the largest and fastest supercomputers in the world. Innovators and builders at heart, the Data Storage team seeks highly motivated, productive, inquisitive, and multi-talented candidates who are equally comfortable working independently as well as part of a team. Team member duties include: designing, building, and maintaining world-class data movement and storage systems; evaluating and testing new technology and solutions; system administration of HPC storage infrastructure in support of compute clusters; diagnosing, solving, and implementing solutions for various system operational problems; tuning file systems to increase performance and reliability of services; process automation.


HPC Platforms Team: Upcoming Student Project Opportunities

PROJECT: HPC CLUSTER REGRESSION
(Lead Mentor: Alden Stradling)

Building on work done by our interns this summer, we are continuing the process of adapting existing regression testing software to do system-level regression testing. Using the LANL-developed Pavilion2 framework in combination with Node Health Check (NHC) for more detailed information, our interns are moving the system from proof-of-concept in a virtualized test cluster to production-style systems to measure effectiveness and system performance impact, and to flesh it out as a running system. Also on the agenda is to make test creation and propagation simple, allowing regression detection to be added at the same time as fixes are made to the system.

Preferred skills

• Interest in HPC and modern infrastructure management at scale
• Problem solving and creativity
• Configuration Management
• Version Control
• Programming experience in bash, Python, or Perl
• Strong background in UNIX and familiarity with the CLI

About the HPC Platforms Team
The High Performance Computing (HPC) Platforms Team provides vanguard system and runtime support for some of the largest and fastest supercomputers in the world, including multi-petaop systems (e.g., the recently deployed 40 Peta operations per second Trinity Supercomputer). Troubleshooters and problem-solvers at heart, the HPC Platforms Team seeks highly motivated, productive, inquisitive, and multi-talented candidates who are equally comfortable working independently as well as part of a team. Team member duties include: system deployment, configuration, and full system administration of LANL’s world-class compute clusters; evaluating and testing new technology and solutions; system administration of HPC network infrastructure in support of compute clusters; diagnosing, solving, and implementing solutions for various system operational problems; system software management and maintenance, including security posture maintenance; tuning operating systems to increase performance and reliability of services; developing tools to support automation, optimization and monitoring efforts; interacting with vendors; and communicating and collaborating with other groups, teams, projects and sites.


HPC Design Group: Upcoming Student Project Opportunities

PROJECT: OPTIMIZING “SPACK CONTAINERIZE” FOR USE WITH CHARLIECLOUD

(Lead Mentor: Tim Randles)

The Spack software package manager has the ability to output software build recipes as dockerfiles. These dockerfiles often require hand-editing to work well with Charliecloud. In this project you will work with the Charliecloud team at Los Alamos to identify common problems with Spack dockerfiles. You will then determine if these problems are best addressed by making changes to Charliecloud’s dockerfile support or if there are improvements that should be proposed to Spack’s containerize functionality. The intern will be expected to implement suggested changes. At the end of the summer the intern will present their work.

PROJECT: BUILDING A GITLAB TEST INFRASTRUCTURE USING THE ANSIBLE REPOSITORY

(Lead Mentor: Cory Lueninghoener)

Use GitLab’s CI/CD pipeline and runner functionality to build an automated test infrastructure for check-ins to our Git-backed Ansible repository. This would start out with getting familiar with GitLab’s automated pipeline capabilities and running tasks on code check-in, and move on to simple linting tests that run each time a change is checked in. From there, it could move on to running larger test suites on VMs or in containers, all the way up to building and testing virtual clusters and tagging good cluster image releases.

About the HPC DES Group:
The High Performance Computing Design Group focuses on future technologies and systems related to HPC while providing technical resources when needed to the more production-focused HPC groups. Areas of focus include I/O and storage, future HPC architectures, system management, hardware accelerators, and reliability and resiliency. Production timescales of projects vary from weeks in the future for production deployments to 10 years or more for some of the reliability and future architecture work.


Where You Will Work:
Our diverse workforce enjoys a collegial work environment focused on creative problem solving, where everyone’s opinions and ideas are valued. We are committed to work-life balance, as well as both personal and professional growth. We consider our creative and dedicated scientific professionals to be our greatest assets, and we take pride in cultivating their talents, supporting their efforts, and enabling their successes. We provide mentoring to help new staff build a solid technical and professional foundation, and to smoothly integrate into the culture of LANL.

Los Alamos, New Mexico enjoys excellent weather, clean air, and outstanding public schools. This is a safe, low-crime, family-oriented community with frequent concerts and events as well as quick travel to many top ski resorts, scenic hiking & biking trails, and mountain climbing. The short drive to work includes stunning views of rugged canyons and mesas as well as the Sangre de Cristo mountains. Many employees choose to live in the nearby state capital, Santa Fe, which is known for world-class restaurants, art galleries, and opera.

About LANL:
Located in northern New Mexico, Los Alamos National Laboratory (LANL) is a multidisciplinary research institution engaged in strategic science on behalf of national security. LANL enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

The High Performance Computing (HPC) Division provides production high performance computing systems services to the Laboratory. HPC Division serves all Laboratory programs requiring a world-class high-performance computing capability to enable solutions to complex problems of strategic national interest. Our work starts with the early phases of acquisition, development, and production readiness of HPC platforms, and continues through the maintenance and operation of these systems and the facilities in which they are housed. HPC Division also manages the network, parallel file systems, storage, and visualization infrastructure associated with the HPC platforms. The Division directly supports the Laboratory’s HPC user base and aids, at multiple levels, in the effective use of HPC resources to generate science. Additionally, we engage in research activities that we deem important to our mission.

Los Alamos National Laboratory, Supercomputer Institute Summer Internship Opportunity

By | News, SC2 jobs

PROGRAM OVERVIEW
The Supercomputer Institute is an intense, paid, 11-week, hands-on technical internship for people of all majors interested in the growing field of high-performance computing. You will obtain a thorough introduction to the techniques and practices of HPC; no HPC experience is required.

The program begins with two weeks of “boot camp”. Small teams of interns build, configure, test, and operate an HPC compute cluster starting from scratch, turning a heap of equipment, cables, and electricity into a working mini-supercomputer that can run real HPC applications.

Next, the project phase begins. Teams of interns work under the guidance of HPC Division staff mentors on applied research and development projects that address real challenges currently faced by the division. Some projects use the mini-supercomputers built during boot camp, and others use existing LANL resources. These projects regularly influence the division as well as the field of high-performance computing.

Finally, teams present their accomplishments as a poster and technical talk to Laboratory management, staff, and fellow interns in an end-of-summer celebration of intern work.

The program runs June 1, 2021 – August 13, 2021.

View full job post here.

PROFESSIONAL DEVELOPMENT:
In addition to the technical portion of the program, interns also participate in fast-paced, focused professional development work, including:

  • Intense mentoring
  • Teamwork and professional collaboration
  • Resume writing and evaluation
  • Technical poster/presentation design and public speaking
  • Technical seminars on current HPC topics. Past seminars include high-speed networking, Linux containers, parallel filesystems, facilities, and more.
  • Science lectures given by staff from across the Laboratory, from how the Mars Rover works to machine learning/AI to black hole collisions.
  • Opportunities to sign up for tours of our world-class facilities, including the magnet lab, particle accelerator, million-core supercomputer, and ultra-cold quantum computer.

WHO IS ELIGIBLE TO APPLY:
The program is targeted to rising juniors or seniors, master’s students, and recent graduates with a bachelor’s or associate’s degree. Very highly qualified rising sophomores have been successful in the past, as well as occasional master’s graduates and Ph.D. students who can make a good case that they need hands-on practical training, rather than a research internship.

REQUIRED QUALIFICATIONS:
Interns must meet the following minimum requirements. If you are unsure whether you meet them, please ask us! We don’t want to miss someone because they meet the requirements in a way we did not anticipate.

  • Computer science, computer engineering, IT, or related experience/training.
  • Intermediate understanding of the Linux OS. For example, this might mean you have a basic understanding of how an operating system works, some experience using Linux, and some knowledge of how Linux differs from desktop (e.g., Mac, Windows) or phone OSes (Android, iOS).
  • Intermediate command line skills. You should have basic knowledge of the terminal using a shell such as tcsh or Bash. This doesn’t necessarily have to be on Linux (Macs also have a nice command line).
  • Scripting or programming experience of some kind.
  • Collegial, personable, plays well with others; the program is a team sport. Please note this does not mean you have to be “normal”; neurodiversity is encouraged.
  • Well-rounded and curious.
  • Can deal with reasonable deadlines. It’s a fast-paced program, but not high pressure.
  • Meets LANL undergraduate or graduate student program requirements, as applicable.

DESIRED QUALIFICATIONS:
In addition to the above, we’re looking for interns that also have some of the following skills. Note that few interns have all of them.

  • Strong communication skills (written and/or oral).
  • Interesting experience with Linux, hardware, networking, security, filesystems, etc.
  • HPC experience, whether sysadmin or user.
  • C or systems programming experience.
  • Interesting novel perspectives. Can you expand our horizons?

APPLICATION DEADLINE:
Deadline to apply is December 1, 2020.

HOW TO APPLY:
Apply via the instructions on this page. You’ll need to submit the following materials:

  • Current resume
  • Unofficial transcript, including GPA
  • Cover letter describing:
    • Your professional interests, experience, and goals
    • Why you are interested in the Supercomputer Institute
    • How you meet the minimum and desired skills above
    • What you hope to contribute to our team environment

ABOUT LOS ALAMOS:
Los Alamos is a small town in the mountains of northern New Mexico, located at an elevation of 7,500 feet.

The town has an active intern community with various events such as free concerts. Outdoor activities are abundant, including hiking, camping, mountain biking, and rock climbing. Summers tend to be warm, and either dry or with afternoon monsoonal thunderstorms.

Stephen Timoshenko Distinguished Postdoctoral Fellowship at Stanford University

By | News, SC2 jobs

Stanford’s Mechanics and Computation Group (Department of Mechanical Engineering) is seeking applicants for a two-year term distinguished postdoctoral fellowship.

 

ABOUT THE FELLOWSHIP:

The Stephen Timoshenko Distinguished Postdoctoral Fellow will be given the opportunity to pursue independent research in the general area of solid mechanics, as well as to contribute to ongoing research in the Mechanics and Computation Group. 

QUALIFICATIONS:

  • Research activities should be in the field of solid mechanics interpreted broadly. 
  • The candidate should be aligned with interests in the group, which include additive manufacturing, micro- and nano-mechanics, and bio-mechanics, with an interest in machine learning as it applies to the field of computational mechanics. 
  • Candidates will be given opportunities to develop their teaching experience by designing and teaching a class in the mechanics curriculum. 
  • This position is primarily targeting candidates who are seeking an academic career in a leading research university.
  • Candidates are expected to show outstanding promise in research, as well as strong interest and ability in teaching. 
  • They must have received a Ph.D. prior to the start of the appointment, but not more than 2 years before. 

APPLICATION DEADLINE: 

Fellowship applications are accepted year-round, with deadlines on October 1, December 1, April 1, and July 1. 

  • Applications received before these dates will be reviewed together. 
  • This position will close as soon as an offer is made and has been accepted by a candidate.

HOW TO APPLY:

Send your application by email to Kelly Chu, kchu22@stanford.edu

  • Email subject: Stephen Timoshenko Distinguished Postdoctoral Fellow search
  • All documents attached to the email should be PDF (Portable Document Format).

Application documents:

  • Cover letter (one page)
  • Curriculum vitae
  • List of publications
  • Brief statements of proposed research (up to three pages) and teaching (one page) 
  • Names and contact information of three recommendation letter writers

EEO STATEMENT:

Stanford is an equal opportunity employer and all qualified applicants will receive consideration without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Stanford welcomes applications from candidates who bring additional dimensions to the University’s research and teaching missions.


We welcome 15 students to the 2020-21 class of MICDE graduate fellows

By | Educational, News

MICDE is proud to announce the recipients of the 2020 MICDE graduate fellowships. The fellows’ research projects involve the use and advancement of scientific computing techniques and practices. From political science, psychology, physics, and applied and interdisciplinary mathematics within the College of Literature, Science & the Arts to aerospace engineering, mechanical engineering, materials science engineering, industrial & operations engineering, and civil & environmental engineering within the College of Engineering, the 2020 MICDE fellows epitomize the reach of computation in diverse scientific disciplines.

For the past six years, MICDE has awarded fellowships to over 120 graduate students from our large community of computational scientists. The MICDE graduate student top-off fellowship provides students with a stipend to use for supplies, technology, and other materials that will further their education and research. Among other things, awards have helped many to travel to conferences and meetings around the world to share the rich and diverse research in computational science being carried out at U-M.

The awardees are:

Eytan Adler, Aerospace Engineering
Hessa Al-Thani, Industrial and Operations Engineering
Zijie Chen, Mechanical Engineering
Alexander Coppeans, Aerospace Engineering
Xinyang Dong, Physics
Karthik Ganesan, Psychology
Iman Javaheri, Aerospace Engineering
Huiwen Jia, Industrial and Operations Engineering
Daeho Kim, Civil and Environmental Engineering
Yudan Liu, Chemistry
Emily Oliphant, Materials Science and Engineering
Ryan Sandberg, Applied and Interdisciplinary Mathematics
Patrick Wu, Political Science
Zhucong Xi, Materials Science and Engineering
Yi Zhu, Civil and Environmental Engineering

Learn more about the fellows and the MICDE Fellowship program

Graduate Research Assistantships for Fall 2020 Term in Computational Multiphase/Multi-Physics Projects

By | News, SC2 jobs

Professor Jesse Capecelatro’s Computational Multiphase/Multi-Physics Flow Lab is seeking Three Graduate Students

 

Professor Jesse Capecelatro is a faculty member within the College of Engineering’s Mechanical Engineering and Aerospace Engineering departments. Prof. Capecelatro’s lab group is seeking current or recently graduated Master’s or Ph.D. students for paid Research Assistant positions starting in the Fall 2020 term. Read more about Prof. Capecelatro’s research group here.

Research Assistants will be working on one of three projects.

PROJECT #1: MODELING TURBULENT FLOWS WITH FINITE SIZE PARTICLES ON HETEROGENEOUS ARCHITECTURES

Description: The objective of this project is to develop a highly scalable direct numerical simulation (DNS) code that leverages new algorithmic advances in (a) turbulence simulation using a pseudo-spectral approach on heterogeneous architectures and (b) efficient scaling of particle dynamics with the number of particles, to perform massive-scale simulations with a mixture of CPUs and GPUs. The student will work with Prof. Capecelatro at UM and collaborators at Iowa State and Georgia Tech. The majority of the code will be written in Fortran 90 and C.
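
As a toy illustration of the pseudo-spectral approach named in (a) (the project code itself is written in Fortran 90 and C and targets CPU/GPU systems), the sketch below differentiates a periodic field by multiplying its Fourier transform by ik; it is purely illustrative and not part of the project code.

```python
# Toy pseudo-spectral derivative on a periodic 1D domain, illustrating the
# general approach; it is not taken from the project's Fortran/C code.
import numpy as np

n = 256
L = 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(3.0 * x)                                   # sample periodic field

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # differentiate in Fourier space

print(np.max(np.abs(du_dx - 3.0 * np.cos(3.0 * x))))  # error near machine precision
```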

This position is expected to last 1 year in duration with the possibility of extension, and work will be performed remotely. Compensation for this position will be based on experience and qualifications.

Desired Qualifications:

  • Major in Mechanical Engineering, Computer Science, or similar
  • Strong background in fluid mechanics
  • Good knowledge in turbulence
  • Excellent programming skills in a high-performance language like C, Fortran, Python
  • Familiar with parallel computing

PROJECT #2: MULTI-STEP EFFECTIVENESS FACTORS FOR NON-SPHERICAL CATALYSTS

Description: Prof. Capecelatro and his postdoc Aaron Lattanzi will provide support to the graduate student on development of new models for diffusion limited reaction schemes that will be delivered to the National Renewable Energy Laboratory (NREL). The multi-step effectiveness vector (MEV) previously derived by CO-PI Lattanzi will be expanded to account for cylindrical and infinite slab catalyst geometries. Reactant concentration profiles and volume-averaged reaction rates predicted by the new MEV will be directly compared to high-fidelity simulations conducted by NREL to verify the model.
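
For context, the sketch below evaluates the classical single-step, first-order effectiveness factors for slab and cylinder geometries as functions of the Thiele modulus. These are textbook expressions of the kind the multi-step effectiveness vector generalizes, not the new model being delivered to NREL.

```python
# Classical first-order effectiveness factors for two catalyst geometries,
# written as functions of the Thiele modulus. Textbook results only; the
# project's multi-step effectiveness vector (MEV) generalizes this idea to
# multi-step reaction schemes.
import numpy as np
from scipy.special import iv   # modified Bessel functions I_nu

def eta_slab(phi):
    """Infinite slab of half-thickness L, phi = L * sqrt(k / D_eff)."""
    return np.tanh(phi) / phi

def eta_cylinder(phi):
    """Infinitely long cylinder of radius R, phi = R * sqrt(k / D_eff)."""
    return 2.0 / phi * iv(1, phi) / iv(0, phi)

phi = np.array([0.1, 1.0, 10.0])
print(eta_slab(phi))       # near 1 when diffusion is fast, ~1/phi when it limits
print(eta_cylinder(phi))   # near 1 when diffusion is fast, ~2/phi when it limits
```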

This position is expected to last 9 months in duration with the possibility of extension, and work will be performed remotely. Compensation for this position will be based on experience and qualifications.

Desired Qualifications:

  • Major in Chemical Engineering, Mechanical Engineering, or similar
  • Excellent programming skills in a high-performance language like C, Fortran, Python
  • Strong background in fluid mechanics
  • Familiarity with chemical kinetics (CHE 344. Reaction Engineering and Design or similar class)

PROJECT #3: SENSITIVITY AND UNCERTAINTY QUANTIFICATION OF MODELING PARAMETERS FOR SIMULATING HIGH-SPEED MULTIPHASE FLOWS

Description: The student will perform a literature review on the state-of-the-art in modeling compressible particle-laden flows. Simulations will be performed of shock waves interacting with solid particles using our in-house high-speed multiphase flow solver (Fortran 90). A sensitivity analysis will be performed to quantify the effect of particle statistics on modeling parameters.
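
The flavor of the sensitivity analysis can be sketched with a simple one-at-a-time parameter study, as below; the model, parameter names, and values are hypothetical placeholders rather than the in-house solver’s actual closures.

```python
# One-at-a-time finite-difference sensitivities of a placeholder quantity of
# interest with respect to its modeling parameters. The model, parameter
# names, and values are hypothetical stand-ins for the solver's closures.

def quantity_of_interest(params):
    """Placeholder output (e.g., a mean post-shock particle velocity)."""
    c_d, alpha, e_rest = params["c_d"], params["alpha"], params["e_rest"]
    return c_d * (1.0 - alpha) ** 2.65 * (1.0 + 0.5 * e_rest)

baseline = {"c_d": 0.44, "alpha": 0.3, "e_rest": 0.9}  # drag coeff., volume fraction, restitution
rel_step = 0.01
q0 = quantity_of_interest(baseline)

sensitivities = {}
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * (1.0 + rel_step)})
    # normalized sensitivity: % change in output per % change in the parameter
    sensitivities[name] = (quantity_of_interest(perturbed) - q0) / q0 / rel_step

print(sensitivities)
```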

This position is expected to last 9 months in duration with the possibility of extension, and work will be performed remotely. Compensation for this position will be based on experience and qualifications.

Desired Qualifications:

  • Major in Aerospace Engineering, Mechanical Engineering, or similar
  • Excellent programming skills in a high-performance language like C, Fortran, Python
  • Familiar with uncertainty quantification, tools for sensitivity analyses
  • Strong background in fluid mechanics
  • United States citizenship

APPLY TODAY!
Please send your CV, transcript, and a brief statement about your interests and background relative to the projects listed above to Professor Jesse Capecelatro at jcaps@umich.edu with the subject “Fall 2020 Research Assistantship”.

Graduate Research Assistantships for Fall 2020 Term in Physics-based Data-driven Modeling Projects

By | News, SC2 jobs

Professor Julie Young’s Lab Seeking Two Engineering-focused Grad Students to Assist in Modeling Research

Professor Julie Young is a faculty member within the College of Engineering’s Naval Architecture and Marine Engineering, Mechanical Engineering, and Aerospace Engineering departments. Professor Young’s lab group is seeking graduate students (current or recently graduated master’s or Ph.D.’s) for paid Research Assistant positions starting in the Fall 2020 term. The expected time commitment for these positions is 20 hours per week.

Students will be working on one of two projects:

Project #1 Description: Development of a physics-based data-driven model for system identification and control of lifting surfaces in multiphase flow.

Project #1 Desired Qualifications:

  • Excellent programming skills
  • Good knowledge of system identification
  • Familiarity with data-driven models, control methods
  • Familiarity with experimental modeling and data analysis
  • Good knowledge of nonlinear fluid and structural dynamics
  • Engineering major or extensive coursework in engineering-related field
  • United States citizenship

Project #2 Description: Development of physics-based data-driven model for marine ship-propulsion system.

Project #2 Desired Qualifications:

  • Excellent programming skills
  • Good knowledge of system identification and data-driven models
  • Familiarity with experimental modeling and data analysis
  • Good knowledge of propulsion systems
  • Engineering major or extensive coursework in engineering-related field
  • United States citizenship

Compensation:
Compensation for these positions will be commensurate with experience and qualifications.

Apply Today!
Please send your CV, transcript, and a brief statement about your interests and background relative to the projects listed above to Professor Julie Young at ylyoung@umich.edu with subject, “Fall 2020 Research Assistantship”.

Alternatives Research & Development Foundation to Support Research on COVID-19, Aiming for Advancement in Non-animal Methods of Drug Discovery

By | News, Research

Pharmaceutical companies across the globe are racing to bring clinically tested and approved therapeutic drugs that fight the COVID-19 virus to market. As is typical in drug discovery research, animals have played a critical role in the development and testing of COVID-19 therapeutics. A proposal by U-M Professor Rudy J. Richardson, Dow Professor Emeritus of Toxicology, Professor Emeritus of Environmental Health Sciences, and Associate Professor Emeritus of Neurology at the University of Michigan, titled “Discovering host factor inhibitors in silico for SARS-CoV-2 entry and replication,” has been awarded funding to identify compounds that bind to human proteins that facilitate entry and/or replication of the SARS-CoV-2 virus. Awarded in part because of its potential to develop alternative methods that advance science and replace or reduce animal use, this research will employ in silico ligand-protein docking to discover existing drugs (repurposing) and/or new drug candidates capable of inhibiting host proteins involved in infection pathways for the COVID-19 virus, SARS-CoV-2.

Protein docking targets include four serine hydrolases. Using these targets, researchers will reversibly dock approximately 40,000 ligands from the Binding Database comprising FDA-approved drugs along with serine protease and PLA2 inhibitors, including organoboron compounds. Then, covalent docking will be conducted on a ligand subset containing pharmacophores capable of covalently binding serine hydrolases. Consensus ranking from four docking programs will be used to generate a penultimate list of candidate compounds. Those showing high predicted potency against off-target serine hydrolases will be excluded. The final list of compounds will be made publicly available for further evaluation in bioassays.
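
The consensus-ranking step can be pictured with a small pandas sketch like the one below, which averages per-program ranks and drops compounds flagged against off-target serine hydrolases; the column names, scores, and the off-target flag are hypothetical placeholders, and the actual study applies its own scoring and exclusion criteria.

```python
# Hedged sketch of consensus ranking across four docking programs and
# exclusion of likely off-target binders. Column names, scores, and the
# off-target flag are hypothetical placeholders.
import pandas as pd

scores = pd.DataFrame({
    "ligand":   ["cmpd_A", "cmpd_B", "cmpd_C", "cmpd_D"],
    "program1": [-9.1, -7.4, -8.2, -6.9],   # docking scores (lower = better)
    "program2": [-8.7, -7.9, -8.9, -7.1],
    "program3": [-9.4, -6.8, -8.0, -7.5],
    "program4": [-8.9, -7.2, -8.6, -7.0],
    "off_target_hit": [False, False, True, False],
})

program_cols = ["program1", "program2", "program3", "program4"]
ranks = scores[program_cols].rank(ascending=True)     # rank 1 = best score
scores["consensus_rank"] = ranks.mean(axis=1)

candidates = (scores[~scores["off_target_hit"]]
              .sort_values("consensus_rank"))
print(candidates[["ligand", "consensus_rank"]])
```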

Professor Richardson’s grant, awarded by the Alternatives Research & Development Foundation, is part of the ARDF’s 2020 Open Grants program, which funds research projects that develop alternative methods to advance science and replace or reduce animal use. Although the immediate goal of this computational study is to support the identification or development of COVID-19 therapeutics, the long-range vision is to advance computational and in vitro approaches to eliminate animal use from drug discovery for humans and other species.

MICDE Affiliated Faculty member Rudy J. Richardson is a Dow Professor Emeritus of Toxicology and Professor Emeritus of Environmental Health Sciences within the School of Public Health, and Associate Professor Emeritus of Neurology within the Medical School at the University of Michigan.