Many people bandy about the notion of “parallelism,” saying such things as, “This optimization makes my application more parallel,” with only a hazy intuition about what they’re actually saying. Others cite Amdahl’s Law, which provides bounds on speedup due to parallelism, but which does not actually quantify parallelism. In this talk, Prof. Leiserson will review a general and precise quantification of parallelism provided by theoretical computer science, which every computer scientist and parallel programmer should know. He will argue that parallel-programming models should eschew concurrency and encapsulate nondeterminism. Finally, he’ll discuss why the impending end of Moore’s Law — the economic and technological trend that the number of transistors per semiconductor chip doubles every two years — will bring new poignancy to these issues.
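
For background: Amdahl's Law bounds the speedup attainable when only a fraction p of a computation can be parallelized across P processors, whereas the work/span measure (the quantification used in Leiserson's textbook Introduction to Algorithms, and presumably the one the talk refers to) defines parallelism directly:

\[
\text{speedup} \;\le\; \frac{1}{(1-p) + p/P},
\qquad
\text{parallelism} \;=\; \frac{T_1}{T_\infty},
\]

where T_1 is the work (the running time on one processor) and T_\infty is the span (the running time on an unbounded number of processors). The first expression only bounds speedup; the second assigns the computation a single number that says how much speedup is possible.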

There will be a reception afterwards in the third-floor alcove of the Beyster Building for those who attend this seminar or the CSE Distinguished Lecture featuring Rosalind Picard, Founder and Director of the Affective Computing Research Group and Professor of Computer Science and Engineering at the Massachusetts Institute of Technology. More information on that event is available on the CSE web site.

What the $#@! Is Parallelism? (And Why Should Anyone Care?)

3 – 4 p.m., Monday, March 23
Cooley G906

Charles E. Leiserson received his B.S. from Yale University in 1975 and his Ph.D. from Carnegie Mellon University in 1981. He is currently Professor of Computer Science and Engineering at MIT, where he holds the Edwin Sibley Webster Professorship in the Department of Electrical Engineering and Computer Science (EECS). He is a Margaret MacVicar Faculty Fellow, MIT's highest recognition for undergraduate teaching. He is a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), a member of the lab's Theory of Computation Group (TOC), and head of the lab's Supertech Research Group. Professor Leiserson is an ACM Fellow, an AAAS Fellow, and a senior member of IEEE and SIAM.

He is coauthor of Introduction to Algorithms (The MIT Press), one of the most cited publications in computer science. He has received numerous awards, including the 2013 ACM Paris Kanellakis Theory and Practice Award, the 2014 IEEE Taylor L. Booth Education Award, and the 2014 ACM-IEEE Computer Society Ken Kennedy High-Performance Computing Award. Among his research contributions are the retiming method of digital-circuit optimization, the fat-tree interconnection network, the notion of cache-oblivious algorithms, and the Cilk multithreaded programming technology. His current research centers on software performance engineering: how to engineer code that runs fast and has a small memory footprint.