U-M professor Quentin Stout, a veteran of all 28 Supercomputing conferences, reflects on SC through the years


Quentin Stout and Christiane Jablonowski teaching the Parallel Computing 101 tutorial at SC07.

Quentin Stout, University of Michigan Professor of Computer Science and Engineering (CSE) and Climate and Space Sciences and Engineering (CLaSP), has attended all 28 Supercomputing conferences since the event began in 1988. Stout is one of fewer than 20 so-called “SC Perennials” to have attended every one. He and Christiane Jablonowski, associate professor in CLaSP, have taught the Introduction to Parallel Computing tutorial at the conference for many years and are teaching it again this year. Stout, who has been at U-M since 1984, recently answered some questions about the evolution of computer science and supercomputing over the decades.

Question: What was the first SC conference like, and how has it changed over the years?

Stout: The first conference, in 1988, had about 1,500 people, compared to the over 10,000 now. Its focus was on supercomputing and the large centers at DOE, NASA, NSF, etc., along with the companies that were making these systems. There were also some researchers from academia and a few industrial users. The largest supercomputer user, NSA, had people at the conference, but they didn’t have a booth and their badge listed “Fort Meade” as their affiliation.

Over the years the conference has greatly broadened its scope, with a stronger international focus and more participation by universities, cluster vendors of all sizes, networking, storage, commercial software, educational efforts, etc. …

Originally I went to learn more about the field, meet people, see what the emerging areas were, and learn about the latest machines. I still go for these reasons, but now machines and software are improving in a more evolutionary fashion than the somewhat revolutionary changes at the beginning. Going from serial computers to vector or parallel ones was more exciting and groundbreaking than going from 100,000 cores to 1,000,000, though the latter is still challenging. Some things have stayed the same: the parties are still good, and companies are still entering and leaving the supercomputing area. For quite some time, if I brought home a coffee mug from a company, the company would go bankrupt within a few years. More recently, IBM developed the BlueGene series of machines and grabbed the #1 spot in the Top500 ranking, but then dropped out of the market because it wasn’t selling enough machines to recoup the tremendous design cost.

One thing that has happened in the computing field, not just at the conference, is that scientific computing now has a far smaller share of the market, even if you only consider the market for large systems. There have always been large database systems in corporations, but data analytics has greatly expanded the possibilities for profit, and hence there is more investment.

Question: What do you predict for the future of supercomputing?

Stout: The most “super” computers aren’t really single computers, but systems such as Google’s, which continually process a vast number of queries, answering them in fractions of a second using sophisticated algorithms that combine myriad sources from throughout the world, all run on highly tuned systems that keep running even though they have so many components that some are always failing. The production users of supercomputers tend to submit a job, let it run for a long time, analyze the results (perhaps using sophisticated graphics), fix some errors or change some parameters, and repeat. This isn’t the same as systems that are constantly ingesting data, analyzing it with algorithms that incorporate learning components, and responding to increasingly complex queries. Academics, including some at U-M, are involved in this, but it is difficult to create even a scaled-down version of a complete system in an academic computing center. You can view IBM’s Watson as being in this arena, and IBM is now betting that Watson will be a large part of its future.

Here’s an interesting cycle in computing: for over a decade some computational scientists and engineers have been using GPUs (graphics processing units). They are very difficult to use, and only applicable to certain types of problems, but inexpensive in terms of flops/$. However, many scientific computations require double precision arithmetic, which isn’t needed for graphics. Companies like NVIDIA, responding to the scientific computing market, began producing more expensive GPUs with double precision, and now systems such as U-M’s Flux computing cluster include GPUs on some of their boards.
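The precision gap described above is easy to demonstrate. The sketch below is illustrative only (plain Python, using the standard `struct` module to round values through IEEE 754 single precision; Python’s native float is double precision, and the values 10000.0 and 0.0001 are arbitrary examples): a small increment, like one step’s contribution in a long-running simulation, vanishes entirely in single precision but survives in double.

```python
import struct

def f32(x):
    """Round a Python float (IEEE 754 double) through single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Single precision carries ~7 decimal digits, so near 10000.0 the spacing
# between representable values is about 0.001; a 0.0001 increment is lost.
print(f32(10000.0 + 0.0001) == 10000.0)   # True: the increment rounds away
print(10000.0 + 0.0001 == 10000.0)        # False: double precision keeps it
```

Repeated over millions of timesteps, losses like this accumulate, which is why scientific codes pushed GPU vendors toward double precision hardware.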

However, there is a very rapidly growing demand for “deep learning.” The computationally intensive components of this can be run on GPUs relatively easily, but they don’t need double precision, just speed and plenty of parallelism. This summer NVIDIA released a new high-end chip with good double precision performance, but also added half precision, since that is all that is needed for deep learning. Deep learning might well surpass scientific computing as a GPU market.
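To see why half precision suffices for deep learning but not for most simulation, consider its granularity. This is an illustrative sketch, not a description of any particular GPU (plain Python; `struct`’s `'e'` format packs IEEE 754 half precision, which has a 10-bit significand, roughly 3 decimal digits):

```python
import struct

def f16(x):
    """Round a Python float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Between 512 and 1024, half-precision values are spaced 0.5 apart,
# so nearby inputs snap to the nearest representable value.
print(f16(1000.4))   # 1000.5
print(f16(1000.2))   # 1000.0

# A relative error around 0.1% is tolerable for neural-network weights
# and activations, but would quickly corrupt an accumulating simulation.
```

That roughly 0.1% granularity is acceptable noise for training neural networks, which is why half-precision throughput became a selling point for deep learning hardware.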

 [NOTE: Visit the University of Michigan at SC16 at booth 1543.]

 

U-M prepares for SC16 conference in Salt Lake City


University of Michigan researchers and professional research IT staff will participate in the SC16 conference in Salt Lake City from Nov. 13-17 in a number of ways, including demonstrations, presentations and tutorials. Please join us at booth 1543 if you’re at the conference, or at one of the following events:

Sunday, Nov. 13
8:30 a.m. – 5 p.m.: Quentin Stout (EECS) and Christiane Jablonowski (CLASP) will teach the “Parallel Computing 101” tutorial.

Monday, Nov. 14 through Thursday, Nov. 17
U-M will exhibit at booth #1543 alongside Michigan State University. The booth will include an ongoing demonstration of the OSiRIS networking and storage project; information on the Yottabyte Research Cloud; and a presentation on ConFlux.

Tuesday, Nov. 15
10:30 a.m.: Todd Raeker, Research Technology Consultant, ARC-TS, will give a talk on ConFlux at the NVIDIA booth (#2231).
11 a.m.: Project PI Shawn McKee (Physics) will give a presentation on OSiRIS at the U-M booth (#1543).
2:15 p.m.: Nilmini Abeyratne, a Ph.D. student in computer science, will present “Low Design-Risk Checkpointing Storage Solution for Exascale Supercomputers” at the Doctoral Showcase.
1 – 5 p.m.: Todd Raeker, Research Technology Consultant, ARC-TS, will participate in the IBM Power8 University Group Meeting.
3 p.m.: Representatives from Yottabyte and ARC-TS will give a presentation on the Yottabyte Research Cloud.
3:30 – 5 p.m.: Sharon Broude Geva, Director of Advanced Research Computing, will participate in a panel titled “HPC Workforce Development: How Do We Find Them, Recruit Them, and Teach Them to Be Today’s Practitioners and Tomorrow’s Leaders?”

Wednesday, Nov. 16
10 a.m.: Representatives from Yottabyte and ARC-TS will give a presentation on the Yottabyte Research Cloud.
11 a.m.: Ben Meekhof, HPC Storage Administrator, ARC-TS, will give a presentation on OSiRIS at the U-M booth (#1543).
1 p.m.: Todd Raeker, Research Technology Consultant, ARC-TS, will give a talk on ConFlux at the U-M booth (#1543).
5:15 – 7 p.m.: Ben Meekhof, HPC Storage Administrator, ARC-TS, will present at a “Birds of a Feather” meeting on “Ceph in HPC Environments.”

Thursday, Nov. 17
11 a.m.: Project PI Shawn McKee (Physics) will give a presentation on OSiRIS at the U-M booth (#1543).
1 p.m.: Todd Raeker, Research Technology Consultant, ARC-TS, will give a talk on ConFlux at the U-M booth (#1543).

Ann Arbor Deep Learning annual event — Nov. 12


a2-dlearn2016 is an annual event bringing together deep learning enthusiasts, researchers and practitioners from a variety of backgrounds.

MIDAS is proud to co-sponsor the event, which began last year as a collaboration between the Ann Arbor – Natural Language Processing and Machine Learning: Data, Science and Industry meetup groups.

The event will include speakers from the University of Michigan, University of Toronto, Toyota Research Institute and MDA Information Systems.

Please visit the event website for more information. Registration is required as space is limited.

HPC User Meetups set for October, November and December


Users of high performance computing resources are invited to meet ARC-TS HPC operators and support staff in person at an upcoming user meeting:

  • Monday, October 17, 1:10 – 5 p.m., 2001 LSA Building (500 S. State St.)
  • Wednesday, November 9, 1 – 5 p.m., 1180 Duderstadt Center (2281 Bonisteel Blvd., North Campus)
  • Monday, December 12, 1 – 5 p.m., 4515 Biomedical Science Research Building (BSRB, 109 Zina Pitcher Pl.)

There is not a set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Armis, Hadoop, XSEDE, Amazon, or others.

Ask any questions you may have. The ARC-TS staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What ARC-TS services are there, and how do I access them?
  • I want to do X, do you have software capable of it?
  • What is special about GPU/Xeon Phi/Accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
  • I want to learn more about compilers and debugging.
  • I want to learn more about performance tuning, can you look at my code with me?
  • Etc.

MICDE partners with non-profit miRcore


MICDE has partnered with miRcore, a non-profit organization whose mission is to democratize medical research by building funds for microgrants to support innovative genetic research.

MICDE is providing computational resources and support for miRcore outreach activities, as well as connecting our faculty to the miRcore team to provide expertise and to teach students about personalized medicine and end-user driven research.


MICDE affiliated Prof. Barry Grant at GIDAS 2016 Research Conference poster session

On June 6, MICDE affiliated faculty member Barry Grant joined miRcore’s high school club GIDAS (Genes In Diseases And Symptoms) at its 2016 Research Conference. The students experienced a real conference setting, which helped build their interest in science: they gave talks about research projects, presented posters, and took part in hands-on workshops. The conference was a huge success. You can read the proceedings on miRcore’s site.

From August 8-12, 2016, MICDE and ARC-TS will donate a Flux allocation and computational support to GIDAS’ Biotechnology Camp for high school students. The donations will provide students the opportunity to become familiar with the Unix command line and get hands-on experience in computational genomics.

U-M to host XSEDE Boot Camp on MPI, OpenMP, OpenACC, and more — June 16-19


XSEDE, along with the Pittsburgh Supercomputing Center and the National Center for Supercomputing Applications at the University of Illinois, will present a Hybrid Computing workshop.

This four-day event will cover MPI, OpenMP, OpenACC, and accelerators, and will run June 16-19. It will conclude with a special hybrid exercise contest that challenges students to apply their skills over the following three weeks, with the winners awarded the Second Annual XSEDE Summer Boot Camp Championship Trophy.

Due to demand, this workshop will be telecast to several satellite sites, including U-M. This workshop is NOT available via a webcast. The workshop will be telecast at 1255 North Quad.

Registration is required: visit the XSEDE registration site.

Agenda (all times Eastern):

Tuesday, June 16
11:00 Welcome
11:15 Computing Environment
11:45 Intro to Parallel Computing
12:30 Intro to OpenMP
1:30 Lunch Break
2:30 Exercise 1
3:15 More OpenMP
4:30 Exercise 2
5:00 Adjourn

Wednesday, June 17
11:00 Intro to OpenACC
12:00 Exercise 1
12:30 Introduction to OpenACC (cont.)
1:00 Lunch Break
2:00 Exercise 2
2:45 Introduction to OpenACC (cont.)
3:00 Using OpenACC with CUDA Libraries
3:30 Advanced OpenACC
4:00 OpenMP 4.0 Sneak Peek
5:00 Adjourn

Thursday, June 18
11:00 Introduction to MPI
1:00 Lunch break
2:00 Intro Exercises
3:10 Intro Exercises Review
3:15 Scalable Programming: Laplace code
3:45 Laplace Exercise
5:00 Adjourn

Friday, June 19
11:00 Laplace Exercise Review
12:30 Laplace Solution
1:00 Lunch break
2:00 Advanced MPI
3:00 Outro to Parallel Computing
4:00 Hybrid Computing
4:30 Hybrid Competition
5:00 Adjourn