ARC offers HPC consulting service

ARC now provides a consulting service to help researchers deploy and run codes on HPC clusters such as Flux. Consultants can assist with adapting scientific codes to parallel environments, making efficient use of HPC resources, scaling up codes, locating current and emerging performance bottlenecks, identifying hot spots, and parallelizing performance-critical sections.
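
For example, a first pass at locating hot spots in a Python code might use the standard library's cProfile module. The sketch below is generic and illustrative only, not ARC's specific workflow; the work() routine is a hypothetical stand-in for your own performance-critical code:

    # Generic hot-spot identification with Python's built-in cProfile.
    # work() is a hypothetical stand-in for a performance-critical routine.
    import cProfile
    import pstats

    def work():
        # naive O(n^2) pairwise-distance kernel, purely for illustration
        pts = [(i * 0.5, i * 0.25) for i in range(2000)]
        return sum(((a - c) ** 2 + (b - d) ** 2) ** 0.5
                   for (a, b) in pts for (c, d) in pts)

    cProfile.run("work()", "profile.out")   # record timing data to a file
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)

The top entries of such a profile show where the code spends its time, which is the natural starting point for optimization or parallelization.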

Contact Alexander Gaenko <hpc-consulting@umich.edu> to set up an appointment for a high-level review of your code, including the computational methodology adopted, the operating system and/or platform the code was written for, the programming language(s) used, and the parallelization model, if any (e.g., MPI or OpenMP). If the code exhibits performance or scalability problems, please also provide information documenting the behavior, e.g., whether the problematic code and/or use case is memory-, CPU-, or I/O-bound, and the outcome of any performance improvements already attempted.

ARC’s HPC consulting service provides short-term resolutions to code performance issues. If more sustained consultation is required, ARC will work with you on a support mechanism to facilitate a dedicated effort by our consulting team.

2015-2016 MICDE Fellowship winners announced

The Michigan Institute for Computational Discovery and Engineering is proud to announce the winners of the 2015-2016 MICDE fellowships for students enrolled in graduate studies in computational science. For the first time this year, students in both the Ph.D. in Scientific Computing and the Graduate Certificate in Computational Discovery and Engineering programs were eligible for the fellowships.

“MICDE received more than 70 applications for fellowships this year, and the quality across the board was remarkable,” said Ken Powell, Arthur F. Thurnau Professor of Aerospace Engineering and MICDE Associate Director for Education. “We are extremely pleased to be able to help these outstanding students with their ongoing research.”

Winners receive a $4,000 stipend that can be used for conference attendance, hardware purchases, or any research-related uses agreed upon by their advisors.

This year’s fellows in the Graduate Certificate program are:

  • Kevin Bakker, Ecology and Evolutionary Biology and Public Health (Advisors: Mercedes Pascual and Marisa Eisenberg)
  • Federica Cuomo, Biomedical Engineering and Vascular Surgery (Advisor: Alberto Figueroa)
  • Chuanfei Dong, Atmospheric, Oceanic, and Space Sciences (Advisor: Stephen Bougher)
  • Daniel Nunez, Nuclear Engineering and Radiological Sciences (Advisor: Annalisa Manera)
  • Jeff Shi, Ecology and Evolutionary Biology (Advisor: Daniel Rabosky)
  • Lois Smith, Atmospheric, Oceanic, and Space Sciences (Advisor: Michael Liemohn)

Fellows in the Ph.D. program are:

  • Joseph Paki, Physics (Advisor: Emanuel Gull)
  • Devina Sanjaya, Aerospace Engineering (Advisor: Krzysztof Fidkowski)
  • Shaosui Xu, Atmospheric, Oceanic, and Space Sciences (Advisor: Michael Liemohn)

Congratulations to the recipients!

Flux HPC Blog: Large-scale Visualization of Volumes from 2D Images

The Visible Human Project provides a series of high-resolution CT and MRI scans of human bodies. These images can be stitched together to make volume renderings of the original subject.

These images were generated from high-resolution CT scans available here at Michigan. The data in this case consists of over 5,000 2D slices in TIFF format, totaling around 34GB.

On standard systems, working with input data of this size is difficult, let alone with the derived 3D volume. Luckily for us, we can use the VisIt imgvol format, which is designed for exactly this case.

In the above example, 32 cores with 25GB of memory each (800GB total) on the Flux Large Memory nodes were used, with my personal Apple laptop running the VisIt viewer over a home network connection (!!). Memory use in the creation of the above plots ranged from 3GB/core to 7.5GB/core. Rendering performance wasn't interactive, but a plot change would take 15-45 seconds to redraw.

The imgvol format is very simple and allowed us to create these sorts of plots very quickly. Most users don't have such huge data and can run this on their personal lab workstations. If your workstation isn't sufficient, feel free to reach out to ARC-TS at hpc-support@umich.edu.
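
As a rough sketch of how simple it is: an imgvol volume is driven by a small text index listing the slice images. The exact keywords (e.g., for inter-slice spacing) should be checked against the VisIt documentation; the Z_STEP directive and the vhp_slices/ path below are assumptions for illustration:

    # Sketch: build a .imgvol index from a directory of TIFF slices.
    # Assumes the VisIt imgvol reader takes a plain-text list of slice
    # filenames plus a spacing directive; the Z_STEP keyword and the
    # vhp_slices/ path are assumptions -- check the VisIt docs.
    import glob

    slices = sorted(glob.glob("vhp_slices/*.tif"))   # ~5000 2D CT slices

    with open("visible_human.imgvol", "w") as f:
        f.write("Z_STEP: 1.0\n")   # assumed inter-slice spacing directive
        for name in slices:
            f.write(name + "\n")

Opening the resulting .imgvol file in VisIt and adding a Volume plot then produces renderings like those above.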

CASC brochure image contest — May 31 deadline

The Coalition for Academic Scientific Computing (CASC) is holding its annual contest to select images for its yearly brochure, which is distributed to its 82 member universities, federal funding agencies, members of Congress, and peer institutions, and at events throughout the year.

The images and research featured in this brochure are selected through an open competition among CASC members (including U-M).

Pictures must be at least 300 dpi, and be accompanied by a paragraph explaining the image and naming the computational system on which it was created (e.g., Flux). The pictures will be judged on the following criteria:

  • illustrative of research underway
  • timeliness
  • intellectual merit
  • scientific, cultural or economic impact
  • compelling, visually interesting images.

All images from last year’s brochure competition are available for viewing on the web. To submit images for this year’s competition, send image files and short descriptions to ARC Communications Specialist Dan Meisler <dmeisler@umich.edu>, who will forward them to CASC.

The deadline is May 31.

Open meeting for HPC users at U-M — May 22

Users of high performance computing resources are invited to meet Flux operators and support staff in person at an upcoming user meeting:

  • Friday, May 22, 1 – 5 p.m., NCRC Building 520, Room 1122 (Directions)

There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Nyx, XSEDE, or others.

Ask any questions you may have. The Flux staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What Flux/ARC services are there, and how to access them?
  • How to make the most of PBS and learn its features specific to your work?
  • I want to do X, do you have software capable of it?
  • What is special about GPU/Xeon Phi/Accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations. What support can ARC provide?
  • I want to learn more about compilers and debugging.
  • I want to learn more about performance tuning; can you look at my code with me?
  • Etc.

For more information, contact Brock Palen (brockp@umich.edu) at the College of Engineering; Dr. Charles Antonelli (cja@umich.edu) at LSA; Jeremy Hallum (jhallum@umich.edu) at the Medical School; or Vlad Wielbut (wlodek@umich.edu) at SPH.

Seminar: Monitoring and Modeling Great Lakes Water, Drew Gronewold, NOAA-GLERL — May 21

SPEAKER: Drew Gronewold, NOAA-GLERL

TITLE: Monitoring and modeling the water budget and water levels of Earth's largest lake system

TIME: 10 – 11 am ET on Thursday, May 21

LOCATION: School of Natural Resources and Environment, Dana Building Room 1040

ABSTRACT: The North American Great Lakes constitute the largest surface of fresh water on Earth (Lakes Superior and Michigan-Huron alone are the two largest lakes on Earth by surface area). Monitoring and modeling the major components of the Great Lakes water budget, including over-lake precipitation, over-lake evaporation, and runoff, involves an international, multi-institution partnership that leverages a complex combination of sensor networks and modeling platforms. In this presentation, we provide an overview of the drivers behind long-term changes in Great Lakes water levels, including findings from recent research focused on explaining the abrupt water level decline on Lakes Superior and Michigan-Huron in the late 1990s, and the recent record-setting water level surge. Insights from this research underscore the sensitivity of large freshwater systems to regional climate perturbations, and the need for improved understanding of how the future of these systems will be dictated by a combination of climate change, human intervention, and changes in consumptive use.

“Enlighten Your Research” program seeks proposals for enhanced global networking capabilities — May 29 deadline

Enlighten Your Research Global (EYR-Global) is a program organized by 13 national research and education networks (NRENs), with the goal of identifying research programs that could significantly benefit from enhanced global network connectivity. The group aims to foster international collaborations to accelerate the research and discovery process. To promote the benefits of international-scale networking to researchers, EYR-Global is challenging researchers to stretch the boundaries of their science and collaborate with colleagues in other countries to perform experiments enabled by the NRENs' world-class network infrastructure.

The EYR-Global program awards select projects with expert engineering support, consultation, and collaboration to guide and enhance researchers’ workflows.

The selected proposals will receive the networking resources they need for their projects for one year. The EYR partner NRENs, along with any affiliate partners, will provide and develop high-quality network services for all higher education and research institutions in their respective countries.

The awarded teams may receive the following services:

  1. High-performance network infrastructures operated by the participating NRENs and their partners;
  2. Support and consultation with expert network engineers on devising the best end-to-end network connectivity plan to support the proposed research;
  3. Commitment from each participating NREN to an agreed level of network resource provisioning and ongoing support during the program’s allotted time;
  4. The required network connection will be delivered to the NREN termination point at the institution (if present). The institution is responsible for extending the connection within the institution and for the required infrastructure.

The initial application deadline is May 29. Visit the EYR Global website for more information and submission details.

SC15 Doctoral Showcase Program — July 31 deadline

The International Conference for High Performance Computing, Networking, Storage, and Analysis will return to Austin, Texas, for its 27th annual conference, SC15. As part of the Technical Program, the Doctoral Showcase provides an opportunity for students near the end of their Ph.D. to present a summary of their dissertation research in the form of short talks and posters.

Unlike technical paper and poster presentations, the Doctoral Showcase highlights the entire contents of each dissertation, including previously published results, to allow for a broad perspective of the work. Authors of accepted submissions will be invited to present their work at the conference.

Deadline: Friday, July 31
Email Contact: doc-showcase@info.supercomputing.org
Web Submissions: https://submissions.supercomputing.org/

Click here for more on the Doctoral Showcase Program.

HPC workshops on campus — May 11, 14, and 18

The spring schedule has been set for on-campus high performance computing workshops sponsored by ARC.

HPC100 — Introduction to the Linux Command Line for HPC
1 – 4 p.m., B737 East Hall
Monday, May 11
This course will familiarize students with the basics of accessing and interacting with high-performance computers using the GNU/Linux operating system’s command line. For more information, and to register, visit this page.

HPC101 — High Performance Computing Workshop
1 – 5 p.m., B737 East Hall
Thursday, May 14
This course provides an overview of cluster computing in general and how to use the Flux cluster in particular. (Prerequisite: HPC100 or equivalent.)
For more information, and to register, visit this page.

HPC201 — Advanced High Performance Computing Workshop
1 – 5 p.m., B737 East Hall
Monday, May 18
This course will cover more advanced topics in cluster computing on the U-M Flux Cluster. Topics to be covered include:

  • a review of common parallel programming models and basic use of Flux;
  • dependent and array scheduling;
  • advanced troubleshooting and analysis using checkjob, qstat, and other tools;
  • use of common scientific applications, including Python, MATLAB, and R, in parallel environments;
  • parallel debugging and profiling of C and Fortran code, including logging, gdb (line-oriented debugging), ddt (GUI-based debugging), and map (GUI-based profiling) of MPI and OpenMP programs;
  • an introduction to using GPUs.

(Prerequisite: HPC101 or equivalent.)
For more information, and to register, visit this page.
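
For a flavor of the parallel programming models covered, here is a minimal MPI example in Python using mpi4py. This is an illustrative sketch, not official course material, and the exact module and launcher names available on Flux may differ:

    # Minimal MPI "hello world" via mpi4py; run with, e.g.:
    #   mpirun -np 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD      # communicator spanning all launched processes
    rank = comm.Get_rank()     # this process's ID, 0 .. size-1
    size = comm.Get_size()     # total number of MPI processes

    print("Hello from rank %d of %d" % (rank, size))

Each MPI rank runs the same script; the rank ID is what lets each process work on a different piece of the problem.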