University of Michigan researchers and staff took part in demonstrations, talks and a “Parallel Computing 101” tutorial as part of the University’s presence at the Supercomputing 15 (SC15) conference, held Nov. 15-20 at the Austin Convention Center in Austin, Texas. Below is a summary of U-M activities.
- U-M’s booth featured three demonstrations: SDN Optimized High-Performance Data Transfer Systems for Exascale Science, LHCONE Point2point Service with Data Transfer Nodes, and Lyuba the Mammoth: Collaborative Exploration of Volumetric Data. See descriptions below for details. As in previous years, U-M and Michigan State University shared a booth.
The following talks took place at the U-M booth:
- Azher Mughal, Caltech: “Programming OpenFlow Flows for Fun (and Scientific Profit)”
- Shawn McKee, University of Michigan: “Simple Infrastructure to Exploit 100G Wide-Area Networks for Data Intensive Science” and “OSiRIS: Open Storage Research InfraStructure.”
- Kaushik De, University of Texas at Arlington: “PanDA: Update on the ATLAS Global Workflow System and High Performance Networks.”
- U-M professors Quentin Stout (Electrical Engineering and Computer Science) and Christiane Jablonowski (Climate and Space Sciences and Engineering) taught a tutorial titled “Parallel Computing 101.” Stout has attended every Supercomputing conference since the series began in 1988 and has taught this tutorial 18 times.
- Matt Britt, HPC Operations Programming Manager for U-M’s Advanced Research Computing – Technology Services (ARC-TS), spoke at Adaptive Computing’s booth (#833) on the tools ARC-TS uses to enable new types of utilization (Nitro) and to gain deeper insight into its HPC resources (the ELK stack, Graphite and Grafana).
- Sharon Broude Geva, U-M Director of Advanced Research Computing, led a Birds of a Feather session titled “Strategies for Academic HPC Centers.” She was also a panelist in the Women in HPC workshop, in a session titled “Improving diversity at Supercomputing.”
- Amy Liebowitz, a network planning analyst at U-M Information and Technology Services, was one of five women chosen to receive funding under the “Women in IT Networking at SC (WINS)” program. The program is a collaboration among the University Corporation for Atmospheric Research (UCAR), the Department of Energy’s Energy Sciences Network (ESnet), and the Keystone Initiative for Network Based Education and Research (KINBER), and it pays a week of expenses for women to attend the conference and work on SCinet, the conference network. This was Liebowitz’s first time at the conference; she said she had planned to attend for one week, but the funding allowed her to stay for two.
Demonstrations:
Title: SDN Optimized High-Performance Data Transfer Systems for Exascale Science
Booth: California Institute of Technology / CACR #1248, University of Michigan #2103, Stanford University #2009, OCC #749, Vanderbilt #271, Dell #1009, and Echostreams #582
Description: The next generation of major science programs faces unprecedented challenges in harnessing the wealth of knowledge hidden in exabytes of globally distributed scientific data. Researchers from Caltech, FIU, Stanford, the University of Michigan, Vanderbilt, UCSD, and UNESP, along with other partner teams, have come together to meet these challenges by leveraging recent major advances in software-defined and Terabit/sec networks, workflow optimization methodologies, and state-of-the-art long-distance data transfer methods. This demonstration focuses on network path-building and flow optimizations using SDN and intelligent traffic engineering techniques, built on top of a 100G OpenFlow ring at the show site and connected to remote sites including the Pacific Research Platform (PRP). Remote sites will be interconnected using dedicated WAN paths provisioned using NSI dynamic circuits. The demonstrations include (1) the use of Open vSwitch (OVS) to extend the wide-area dynamic circuits from storage to storage, with stable shaped flows at any level up to wire speed; (2) a pair of data transfer nodes (DTNs) designed for an aggregate 400 Gbps flow through 100GE switches from Dell, Inventec and Mellanox; and (3) the use of Named Data Networking (NDN) to distribute and cache large high energy physics and climate science datasets.
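To make the flow-shaping idea concrete, Open vSwitch can cap a port’s egress rate with its built-in linux-htb QoS support, which is one way to produce the kind of stable, shaped flows described above. The Python sketch below simply drives the standard ovs-vsctl rate-limiting recipe; the port name and rate are hypothetical, and the demo team’s actual configuration was not published.

```python
import subprocess

def shape_port(port: str, max_rate_bps: int) -> None:
    """Attach a linux-htb QoS policy to an Open vSwitch port, capping
    egress at max_rate_bps (follows the rate-limiting recipe from the
    OVS documentation)."""
    subprocess.run(
        [
            "ovs-vsctl",
            "set", "port", port, "qos=@newqos", "--",
            "--id=@newqos", "create", "qos", "type=linux-htb",
            f"other-config:max-rate={max_rate_bps}", "queues:0=@q0", "--",
            "--id=@q0", "create", "queue",
            f"other-config:max-rate={max_rate_bps}",
        ],
        check=True,  # raise if ovs-vsctl reports an error
    )

# Hypothetical usage: cap a data-transfer-node port at 40 Gb/s.
shape_port("eth1", 40_000_000_000)
```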
Title: LHCONE Point2point Service with Data Transfer Nodes
Booth: California Institute of Technology #1248, University of Michigan #2103, Vanderbilt University #271
Description: LHCONE (LHC Open Network Environment) is a globally distributed, specialized environment in which the large volumes of LHC data are transferred among international LHC Tier sites (data centers and analysis sites). To date, these transfers have been conducted over the LHC Optical Private Network (LHCOPN), dedicated high-capacity circuits between LHC Tier 1 data centers, and via LHCONE, currently based on L2+VRF services. The LHCONE Point2Point Service aims to provide future-proof networking for the LHC, e.g., by providing support for OpenFlow. This demonstration will show how this goal can be accomplished, using dynamic path provisioning and at least one Data Transfer Node (DTN) in the US connected to at least one DTN in Europe, transferring LHC data between tiers. It will show a network services model that matches the requirements of LHC high energy physics research with emerging capabilities for programmable networking. The demonstration will integrate programmable networking techniques, including the Network Service Interface (NSI), a protocol defined within the Open Grid Forum standards organization. Multiple LHC sites will transfer LHC data through DTNs; DTNs are edge nodes designed specifically and solely to optimize high-performance data transport. The DTNs will be connected by layer 2 circuits created through dynamic NSI requests.
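For context on how such a circuit is requested, NSI connections move through a reserve / commit / provision lifecycle defined by the OGF NSI Connection Service 2.0 specification. The sketch below mimics only that call sequence with a stub client; the real protocol exchanges SOAP messages between requester and provider agents, and every URL and STP identifier here is made up for illustration.

```python
import uuid

class NsiClient:
    """Hypothetical stand-in for an NSI requester agent. A real agent
    exchanges SOAP messages defined by the OGF NSI Connection Service
    2.0 schema; this stub only models the call sequence."""

    def __init__(self, provider_url: str):
        self.provider_url = provider_url

    def reserve(self, src_stp: str, dst_stp: str, capacity_mbps: int) -> str:
        """Ask the provider agent to hold resources for a circuit."""
        connection_id = str(uuid.uuid4())
        print(f"reserve {capacity_mbps} Mb/s {src_stp} -> {dst_stp}")
        return connection_id

    def reserve_commit(self, connection_id: str) -> None:
        """Confirm the held reservation."""
        print(f"commit {connection_id}")

    def provision(self, connection_id: str) -> None:
        """Activate the layer 2 path so the DTNs can start transferring."""
        print(f"provision {connection_id}")

# Reserve, commit, and provision a transatlantic layer 2 circuit between
# two DTN edge ports (STP URNs are invented for this example).
client = NsiClient("https://nsi.example.net/provider")
cid = client.reserve(
    src_stp="urn:ogf:network:umich.edu:2013::dtn-1",
    dst_stp="urn:ogf:network:cern.ch:2013::dtn-1",
    capacity_mbps=10_000,
)
client.reserve_commit(cid)
client.provision(cid)
```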
Title: Lyuba the Mammoth: Collaborative Exploration of Volumetric Data
Booth: University of Michigan #2103
Description: This demonstration depicts Lyuba, a mammoth calf unearthed in Siberia in 2007 after 50,000 years. Lyuba is thought to have died of asphyxiation after falling into the mud hole that ultimately preserved her. Her specimen is considered the best-preserved mammoth mummy in the world and is currently on display in the Shemanovsky Museum and Exhibition Center in Salekhard, Russia.
University of Michigan Professor Daniel Fisher and his colleagues at the U-M Museum of Paleontology arranged to have the mummy scanned using X-ray computed tomography in Ford Motor Company’s Nondestructive Evaluation Laboratory. Research Museum Collection Manager Adam Rountrey then applied a color map to the density data to reveal the internal anatomical structures. This data was provided to the UM3D Lab, a service of the Digital Media Commons within the U-M Library, as an image stack for interactive volumetric visualization. The stack comprises 1,132 JPEG image slices at 762×700 pixels per slice, and each resulting voxel is one cubic millimeter.
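For a sense of what the 3D Lab received, an image stack of this kind maps directly onto a three-dimensional array of density samples. Below is a minimal sketch in Python, with a hypothetical directory and file-naming scheme standing in for the real data (Jugular’s actual loader is not public):

```python
import glob

import numpy as np
from PIL import Image

# Load an ordered stack of JPEG slices into one 3D volume; the path and
# naming scheme are hypothetical stand-ins for the real dataset.
slice_paths = sorted(glob.glob("lyuba_ct/slice_*.jpg"))

# Each slice is 762x700 pixels; the color-mapped slices are collapsed to
# grayscale intensity here for simplicity. Stacking 1,132 of them yields
# a (1132, 700, 762) volume in which each voxel covers one cubic mm.
volume = np.stack([np.asarray(Image.open(p).convert("L")) for p in slice_paths])

print(volume.shape)   # (num_slices, height, width)
print(volume.nbytes)  # ~604 MB at one byte per voxel
```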
When this data is brought into the 3D Lab’s Jugular software, users can interactively slice through the volume by manipulating a series of hexagonal planes. For this demo, users at SC15 in Austin, Texas, can occupy the same virtual space as another user in the MIDEN, an immersive virtual-reality environment in Ann Arbor, Michigan. Via a Kinect sensor in Austin, a 3D mesh of the user is projected into the MIDEN alongside Lyuba, allowing simultaneous interaction with and exploration of the data.
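Interactive slicing of this kind amounts to resampling the voxel grid along an arbitrary plane. The sketch below shows one way to do that with trilinear interpolation; the plane parameters are arbitrary, and the small random volume is a stand-in for the CT stack built in the previous snippet so the example runs on its own.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, origin, u_axis, v_axis, size=(256, 256)):
    """Resample `volume` on the plane through `origin` spanned by the
    unit direction vectors u_axis and v_axis (all in voxel units)."""
    u = np.linspace(-size[0] / 2, size[0] / 2, size[0])
    v = np.linspace(-size[1] / 2, size[1] / 2, size[1])
    uu, vv = np.meshgrid(u, v, indexing="ij")
    # Voxel coordinates of every sample point on the plane: shape (3, H, W).
    pts = (origin[:, None, None]
           + uu * u_axis[:, None, None]
           + vv * v_axis[:, None, None])
    # Trilinear interpolation; points outside the volume are filled with 0.
    return map_coordinates(volume, pts, order=1, cval=0.0)

# Small random stand-in so the snippet is self-contained; in the demo
# this would be the CT volume loaded in the previous sketch.
volume = np.random.randint(0, 256, (128, 128, 128), dtype=np.uint8)
center = np.array(volume.shape, dtype=float) / 2

# A plane through the volume's center, tilted 45 degrees.
u_dir = np.array([0.0, 1.0, 0.0])
v_dir = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])
plane = extract_slice(volume, center, u_dir, v_dir)
print(plane.shape)  # (256, 256)
```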