With input from MIDAS, four research teams from the University of Michigan and Shanghai Jiao Tong University in China are sharing $800,000 in awards to study depression, electric vehicles, urban green space and bone cancer.
Representatives of Consulting for Statistics, Computing and Analytics Research (CSCAR) and the U-M Library (UML) will give an overview of services that are now available to support data-intensive research on campus. As part of the U-M Data Science Initiative, CSCAR and UML are expanding their scopes and adding capacity to support a wide range of research involving data and computation. This includes consulting, workshops, and training designed to meet basic and advanced needs in data management and analysis, as well as specialized support for areas such as remote sensing and geospatial analyses, and a funding program for dataset acquisitions. Many of these services are available free of charge to U-M researchers.
This event will begin with overview presentations about CSCAR and Library system data services. There will also be opportunities for researchers to discuss individualized partnerships with CSCAR and UML to advance specific data-intensive projects. Faculty, staff, and students are welcome to attend.
Time/Date: 4-5 p.m., November 1
Location: Earl Lewis Room, Rackham Building
By Bob Brustman, U-M Civil and Environmental Engineering Department
University of Michigan researchers have received a $2.5 million NSF grant to develop a computational model that they hope will significantly advance natural hazards engineering and disaster science.
Natural hazards engineers study earthquakes, tornadoes, hurricanes, tsunamis, landslides, and other disasters. They work to better understand the causes and effects of these phenomena on cities, homes, and infrastructure and develop strategies to save lives and mitigate damage.
Sherif El-Tawil, the lead PI for the project, is a structural engineer interested in how buildings behave, particularly in natural or man-made disasters. He’s developed 3D models and simulators that show precisely what happens in a building if a particular column or wall is destroyed during an extreme event.
On the project team are Jason McCormick, an earthquake engineering expert, Seymour Spence, who has expertise in wind engineering, and Benigno Aguirre, a social scientist interested in how people behave during catastrophes. The rest of the team includes Vineet Kamat, Carol Menassa, and Atul Prakash, who will develop the simulation techniques used in the project.
The researchers on this newly funded project are creating a computational framework, using the Flux high performance computing cluster, that will define a set of standards for disaster researchers to use when constructing their models, enabling simulation models to work together.
El-Tawil explains: “Disaster research is a thriving area because disasters affect so many people worldwide and there is a lot we can do to reduce loss of life and damage to our civil infrastructure.”
“Lots of researchers study disasters, including engineers like me, but also social scientists, economists, doctors, and others. But all of the studies are essentially niche studies, belonging in the field of the researchers. Our objective is to develop computational standards so that social scientists, engineers, economists, doctors, first responders, and everyone else can produce simulators that interact together in a large, all-encompassing simulation of a disaster scenario. Think of it as the civilian equivalent of a war games simulator.”
“Developing this common computational language will allow completely new studies to occur. Someone might look at the effects of an earthquake on a particular town and its citizens and then the subsequent effects of infectious diseases. With a common language, we can really examine the cascading and potentially out-of-control effects that occur during catastrophic events.”
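The interoperability El-Tawil describes can be illustrated with a minimal sketch. All names, the interface, and the state schema below are hypothetical, invented for illustration; they are not the project's actual standard. The key idea is that once every domain simulator conforms to one step contract over a shared scenario state, models from different disciplines can be composed into a single cascading-event simulation:

```python
# Hypothetical sketch of a common simulator interface -- illustrative only,
# not the project's actual computational standard.
from abc import ABC, abstractmethod

class Simulator(ABC):
    """Every domain simulator conforms to the same step() contract."""
    @abstractmethod
    def step(self, state: dict, dt: float) -> dict:
        """Advance the shared scenario state by dt; return field updates."""

class BuildingDamageSim(Simulator):
    def step(self, state, dt):
        # Toy rule: ground shaking degrades building stock over time.
        damage = state["shaking"] * 0.1 * dt
        return {"collapsed_fraction": min(1.0, state["collapsed_fraction"] + damage)}

class EpidemicSim(Simulator):
    def step(self, state, dt):
        # Toy rule: loss of shelter accelerates disease spread -- the kind of
        # cascading effect a common standard would let researchers couple.
        growth = 0.05 * (1 + state["collapsed_fraction"]) * dt
        return {"infected_fraction": min(1.0, state["infected_fraction"] * (1 + growth))}

def run_coupled(simulators, state, dt, steps):
    """Run heterogeneous simulators against one shared state."""
    for _ in range(steps):
        for sim in simulators:
            state.update(sim.step(state, dt))
    return state

state = {"shaking": 0.5, "collapsed_fraction": 0.0, "infected_fraction": 0.01}
final = run_coupled([BuildingDamageSim(), EpidemicSim()], state, dt=1.0, steps=10)
```

In this sketch, the earthquake model and the epidemic model never reference each other directly; they interact only through the agreed-upon state fields, which is what lets independently built simulators plug into one scenario.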
Beyond developing the computational standards, they hope to create something like an app store through which researchers can share their simulation models and foster new collaborations and new areas of research.
The grant also includes funding for a programmer housed at Advanced Research Computing (ARC) who will become a shared resource for the rest of campus. The Michigan Institute for Computational Discovery and Engineering (MICDE) provided support for the grant submission, and will continue to do so post-award.
The project brings together an experienced team with expertise in engineering, social science, and computer science. Six of the seven core members are from the University of Michigan and the seventh is from the University of Delaware.
- Benigno Aguirre, professor, Disaster Research Center, University of Delaware
- Sherif El-Tawil, professor, Department of Civil and Environmental Engineering, University of Michigan
- Vineet Kamat, professor, Department of Civil and Environmental Engineering, University of Michigan
- Jason McCormick, associate professor, Department of Civil and Environmental Engineering, University of Michigan
- Carol Menassa, associate professor, Department of Civil and Environmental Engineering, University of Michigan
- Atul Prakash, professor, Department of Electrical Engineering and Computer Science, University of Michigan
- Seymour Spence, assistant professor, Department of Civil and Environmental Engineering, University of Michigan
Users of high performance computing resources are invited to meet ARC-TS HPC operators and support staff in person at an upcoming user meeting:
- Monday, October 17, 1:10 – 5 p.m., 2001 LSA Building (500 S. State St.)
- Wednesday, November 9, 1 – 5 p.m., 1180 Duderstadt Center (2281 Bonisteel Blvd., North Campus)
- Monday, December 12, 1 – 5 p.m., 4515 Biomedical Science Research Building (BSRB, 109 Zina Pitcher Pl.)
There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Armis, Hadoop, XSEDE, Amazon, or others.
Ask any questions you may have. The ARC-TS staff will work with you on your specific projects, or just show you new things that can help you optimize your research.
This is also a good time to meet other researchers doing similar work.
This is open to anyone interested; it is not limited to Flux users.
Examples of potential topics:
- What ARC-TS services are available, and how do I access them?
- I want to do X; do you have software capable of it?
- What is special about GPU/Xeon Phi/Accelerators?
- Are there resources for people without budgets?
- I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
- I want to learn more about compilers and debugging.
- I want to learn more about performance tuning; can you look at my code with me?
Can cloud computing systems help make climate models easier to run? Assistant research scientist Xiuhong Chen and MICDE affiliated faculty Xianglei Huang, from Climate and Space Sciences and Engineering (CLASP), provide some answers to this question in an upcoming issue of Computers & Geosciences (Vol. 98, Jan. 2017, online publication link: http://dx.doi.org/10.1016/j.cageo.2016.09.014).
Teaming up with co-authors Dr. Chaoyi Jiao and Prof. Mark Flanner, also in CLASP, as well as Brock Palen and Todd Raeker from U-M’s Advanced Research Computing – Technology Services (ARC-TS), they compared the reliability and efficiency of Amazon Web Services’ Elastic Compute Cloud (AWS EC2) with U-M’s Flux high performance computing (HPC) cluster in running the Community Earth System Model (CESM), a flagship U.S. climate model developed by the National Center for Atmospheric Research.
The team was able to run the CESM in parallel on an AWS EC2 virtual cluster with minimal packaging and code compiling effort, finding that the AWS EC2 can render a parallelization efficiency comparable to Flux, the U-M HPC cluster, when using up to 64 cores. When using more than 64 cores, the communication time between virtual EC2 nodes exceeded the communication time in Flux.
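The parallelization efficiency being compared here is conventionally defined as speedup divided by core count, so values near 1.0 mean near-ideal scaling and falling values signal growing communication overhead. The sketch below uses made-up timing numbers purely for illustration; they are not figures from the study:

```python
def parallel_efficiency(t_serial: float, t_parallel: float, n_cores: int) -> float:
    """Standard definition: efficiency = speedup / core count,
    where speedup = t_serial / t_parallel."""
    return (t_serial / t_parallel) / n_cores

# Hypothetical wall-clock times (seconds) for one fixed workload -- the
# numbers are invented to illustrate scaling falloff past 64 cores.
t1 = 6400.0
timings = {16: 420.0, 64: 115.0, 128: 75.0}  # cores -> wall-clock time

for cores, t in timings.items():
    eff = parallel_efficiency(t1, t, cores)
    print(f"{cores:4d} cores: efficiency = {eff:.2f}")
```

With numbers like these, efficiency stays high through 64 cores but drops noticeably at 128, the same qualitative pattern the team observed when inter-node communication on the EC2 virtual cluster began to dominate.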
Until now, climate and earth systems simulations have relied on numerical model suites that run on thousands of dedicated HPC cores for hours, days, or weeks, depending on the size and scale of each model. Although these HPC resources have the advantage of being supported and maintained by trained IT staff, making them easier to use, they are expensive and not readily available to every investigator who needs them.
Furthermore, the systems within reach are sometimes not large enough to run simulations at the desired scales. Commercial cloud systems, on the other hand, are cheaper, accessible to everyone, and have grown significantly in the last few years. One potential drawback of cloud systems is that users must install all the software and supply the IT expertise needed to run their simulation packages.
Chen and Huang’s work represents an important first step in the use of cloud computing for large-scale climate simulations. Cloud computing systems can now be considered a viable alternative to traditional HPC clusters for computational research, potentially allowing researchers to leverage the computational power offered by a cloud environment.
This study was sponsored by the Amazon Climate Initiative through a grant awarded to Prof. Huang. The local simulations at U-M were made possible by a DOE grant awarded to Prof. Huang.
Top image: http://www.cesm.ucar.edu/
The Michigan Institute for Data Science (MIDAS) hosted Dr. Gary King of Harvard University for a talk titled “Big Data is Not About the Data!” on Friday, Oct. 3 as part of the MIDAS Seminar Series.
Video of the talk is now available for viewing online.
For a schedule of upcoming MIDAS Seminars, visit the seminar webpage.