
Quentin Stout and Christiane Jablonowski teaching the Parallel Computing 101 tutorial at SC07.

Quentin Stout, University of Michigan Professor of Computer Science and Engineering (CSE) and Climate and Space Sciences and Engineering (CLaSP), has attended all 28 of the Supercomputing conferences since the event began in 1988. Stout is one of fewer than 20 so-called “SC Perennials” to have attended every one. He and Christiane Jablonowski, associate professor in CLaSP, have taught the Introduction to Parallel Computing tutorial at the conference for many years and are teaching it again this year. Stout, who has been at U-M since 1984, recently answered some questions about the evolution of the field of computer science and the area of supercomputing over the decades.

Question: What was the first SC conference like, and how has it changed over the years?

Stout: The first conference, in 1988, had about 1,500 attendees, compared with the more than 10,000 now. Its focus was on supercomputing and the large centers at DOE, NASA, NSF, etc., along with the companies that were making these systems. There were also some researchers from academia and a few industrial users. The largest supercomputer user, NSA, had people at the conference, but they didn’t have a booth and their badges listed “Fort Meade” as their affiliation.

Over the years the conference has greatly expanded its scope, with a much broader international focus and more participation by universities, cluster vendors of all sizes, networking, storage, commercial software, educational efforts, etc. …

Originally I went to learn more about the field, meet people, see what the emerging areas were, and learn about the latest machines. I still go for these reasons, but now machines and software are improving in a more evolutionary fashion than the somewhat revolutionary changes at the beginning. Going from serial computers to vector or parallel ones was more exciting and groundbreaking than going from 100,000 cores to 1,000,000, though the latter is still challenging. Some things have stayed the same: the parties are still good, and companies are still entering and leaving the supercomputing area. For quite some time, if I brought home a coffee mug from a company, the company would go bankrupt in a few years. More recently, IBM developed the BlueGene series of machines and grabbed the #1 spot in the Top500 ranking, but then dropped out of the market because it wasn’t selling enough machines to recoup the tremendous design cost.

One thing that has happened in the computing field, not just the conference, is that scientific computing has a far smaller share of the market, even if you only consider the market for large systems. There have always been large database systems in corporations, but data analytics has greatly expanded the possibilities for profit, and hence there is more investment.

Question: What do you predict for the future of supercomputing?

Stout: The most “super” computers aren’t really single computers, but systems such as Google’s, which continually process a vast number of queries, answering them in fractions of a second by using sophisticated algorithms that combine myriad sources from throughout the world, all run on highly tuned systems that keep running even though they have so many components that they are constantly dealing with faulty ones. The production users of supercomputers tend to submit a job, let it run for a long time, analyze the results (perhaps using sophisticated graphics), fix some errors or change some parameters, and repeat. This isn’t the same as systems that are constantly ingesting data, analyzing it using algorithms that incorporate learning components, and responding to increasingly complex queries. Academics, including some at U-M, are involved in this, but it is difficult to create even a scaled-down version of a complete system in an academic computing center. You can view IBM’s Watson as being in this arena, and IBM is now betting that Watson will be a large part of its future.

Here’s an interesting cycle in computing: for over a decade some computational scientists and engineers have been using GPUs (graphics processing units). They are very difficult to use, and only applicable to certain types of problems, but inexpensive in terms of flops/$. However, many scientific computations require double precision arithmetic, which isn’t needed for graphics. Companies like NVIDIA, responding to the scientific computing market, began producing more expensive GPUs with double precision, and now systems such as U-M’s Flux computing cluster include GPUs on some of their boards.

However, there is a very rapidly growing demand for “deep learning.” The computationally intensive components of this can be run on GPUs relatively easily, but they don’t need double precision, just speed and plenty of parallelism. This summer NVIDIA released a new high-end chip with good double precision performance, but also added half precision, since that is all that is needed for deep learning. Deep learning might well surpass scientific computing as a GPU market.
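To make the precision trade-off concrete, here is a minimal sketch (not from the interview; the numbers are purely illustrative) in Python with NumPy. Summing 100,000 small terms in half, single, and double precision shows why many scientific codes insist on double precision while deep-learning workloads can tolerate half precision.

import numpy as np

# Sketch only: sum 100,000 terms of 1e-4 each; the exact answer is 10.0.
terms = np.full(100_000, 1e-4)

for dtype in (np.float16, np.float32, np.float64):
    total = dtype(0.0)
    for v in terms.astype(dtype):
        total = dtype(total + v)  # naive running sum kept in the chosen precision
    print(np.dtype(dtype).name, float(total))

# In half precision the running sum stalls well below 10 once each new term
# falls below the rounding granularity of the total; single precision comes
# close to 10 but is not exact; double precision is essentially exact.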

[NOTE: Visit the University of Michigan at SC16 at booth 1543.]