The convergence of big data and machine learning is giving rise to a new computational social science. Dr. Sandy Pentland, Director of the MIT Connection Science and Human Dynamics labs, will describe some of the insights gained in the areas of collective action, management, and decision science, and how his group is integrating these insights into monitoring and shaping investment in support of the UN’s Sustainable Development Goals, and into the formulation of new privacy and security regulations in both the EU and the US.
Users of high performance computing resources are invited to meet Flux operators and support staff in person at an upcoming user meeting:
- Friday, December 18, 1-5 p.m., 1180 Duderstadt Center
There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Hadoop, XSEDE, Amazon, or others.
Ask any questions you may have. The Flux staff will work with you on your specific projects, or just show you new things that can help you optimize your research.
This is also a good time to meet other researchers doing similar work.
This is open to anyone interested; it is not limited to Flux users.
Examples of potential topics:
• What ARC-TS services are available, and how do I access them?
• How do I make the most of PBS and learn the features most relevant to my work?
• I want to do X; do you have software capable of it?
• What is special about GPUs, Xeon Phis, and other accelerators?
• Are there resources for people without budgets?
• I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
• I want to learn more about compilers and debugging.
• I want to learn more about performance tuning; can you look at my code with me?
• Etc.
XSEDE16, the 5th annual conference of the Extreme Science and Engineering Discovery Environment, will take place July 17-21, 2016 in Miami. The conference will showcase the work of researchers who use XSEDE resources and services, as well as other digital resources and services throughout the world. The theme is “DIVERSITY, BIG DATA, AND SCIENCE AT SCALE: Enabling the Next Generation of Science and Technology.”
The Technical Program includes the following tracks: Accelerating Discovery in Scholarly Research; Advanced Cyberinfrastructure Technology; Software and Software Environments; Visualization Best Practices; Posters; Tutorials; and Birds of a Feather (BOFs). Additional opportunities are also available for student-led work by high-schoolers, undergraduates, and graduate students.
Submissions are now being accepted for tutorials and technical papers:
April 1: Tutorial submission deadline
April 15: Technical papers abstract deadline
April 22: Technical papers submission deadline
Visit the conference site for more information.
As part of the Michigan Institute for Computational Discovery and Engineering (MICDE) Seminar Series, Thomas Hughes, leader of the ICES Computational Mechanics Group at the University of Texas at Austin, will speak on the U-M campus this week.
Hughes will speak at 4 p.m., Wednesday, Dec. 2, in the Johnson Rooms in the Lurie Engineering Center, 1221 Beal Ave. The title of his talk is “Isogeometric Analysis: Ten Years Later.”
For more information, visit the MICDE event page.
Mesrob Ohannessian, a postdoctoral researcher at UC San Diego, will speak on the U-M campus this week as part of the Michigan Institute for Data Science (MIDAS) Seminar Series.
Ohannessian will speak at 4 p.m., Friday, Dec. 4, at 1200 EECS. The title of his talk is “Computation-Statistics Tradeoffs in Unsupervised Learning via Data Summarization.”
See the MIDAS event page for more details.
Nitro, a high-throughput job scheduler from Adaptive Computing, is now available on Flux. Nitro is designed to schedule thousands to millions of tasks very quickly, working in conjunction with the existing Torque scheduler.
Nitro speeds up the scheduling of very short jobs. It lets researchers with large numbers of such jobs submit one PBS job that then works through a list of shorter tasks.
Adaptive Computing’s website describes it this way: “Nitro facilitates the execution of small compute tasks on a very large scale and without the overhead of individual scheduler jobs. Instead of creating individual jobs, Nitro combines all of the compute tasks into a single file. The file is then sent to Nitro as part of a job, and Nitro distributes the compute tasks across the allocated nodes. Tasks are executed on multiple threads on each compute node. Since the overhead of managing these tasks is small, most of the allocated compute resources can be spent executing the desired tasks.”
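As a rough illustration of the pattern Nitro automates, the sketch below shows a plain-Python version of the same idea: a single batch job reads a task file of short shell commands and fans them out over worker threads on one node. The file name, task-file format, and thread count are assumptions for illustration only, not Nitro’s actual interface; consult the Nitro documentation for the real submission workflow.

```python
# Illustrative only: a plain-Python stand-in for the pattern Nitro automates
# (one batch job working through a file of many short tasks). The task-file
# format and worker count below are assumptions, not Nitro's real interface.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASK_FILE = "tasks.txt"  # hypothetical: one short shell command per line
WORKERS = 16             # threads per node; tune to the cores allocated by PBS

def run_task(command: str) -> int:
    """Run one short task and return its exit code."""
    return subprocess.run(command, shell=True).returncode

with open(TASK_FILE) as f:
    tasks = [line.strip() for line in f if line.strip() and not line.startswith("#")]

# A single job executes the whole list across threads, instead of submitting
# thousands of individual scheduler jobs and paying per-job overhead each time.
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    exit_codes = list(pool.map(run_task, tasks))

print(f"{exit_codes.count(0)} of {len(tasks)} tasks succeeded")
```

In practice, Nitro distributes the task list across all of the nodes allocated to the job, with multiple threads per node, which is what makes it suitable for task counts in the thousands to millions.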
For more information, visit arc-ts.umich.edu/nitro or send a message to hpc-support@umich.edu.
The School of Public Health will now share the costs of access to the Flux shared computing cluster with its researchers. The Medical School, the College of Engineering, and the College of Literature, Science and the Arts also share costs of Flux access.
Updated rates for SPH researchers can be found on the Ordering Services page of the Advanced Research Computing – Technology Services website.
Please email hpc-support@umich.edu with any questions.
University of Michigan researchers and staff took part in demonstrations, talks and a “Parallel Computing 101” tutorial as part of the University’s presence at the Supercomputing 15 conference Nov. 15-20 at the Austin Convention Center in Austin, Texas. Below is a summary of U-M activities.
- U-M’s booth featured three demonstrations: SDN Optimized High-Performance Data Transfer Systems for Exascale Science, LHCONE Point2point Service with Data Transfer Nodes, and Lyuba the Mammoth: Collaborative Exploration of Volumetric Data. See descriptions below for details. As in previous years, U-M and Michigan State University shared a booth.
The following talks took place at the U-M booth:
- Azher Mughal, Caltech: “Programming OpenFlow Flows for Fun (and Scientific Profit)”
- Shawn McKee, University of Michigan: “Simple Infrastructure to Exploit 100G Wide-Area Networks for Data Intensive Science” and “OSiRIS: Open Storage Research InfraStructure.”
- Kaushik De, University of Texas at Arlington: “PANDA: Update on the ATLAS Global Workflow System and High Performance Networks.”
- U-M professors Quentin Stout (Electrical Engineering and Computer Science) and Christiane Jablonowski (Climate and Space Sciences and Engineering) taught a tutorial titled “Parallel Computing 101.” Stout has attended every Supercomputing conference since the series began in 1988, and has taught this tutorial 18 times.
- Matt Britt, HPC Operations Programming Manager for U-M’s Advanced Research Computing – Technology Services, spoke at Adaptive Computing’s booth (#833) on the tools ARC-TS is using to enable new kinds of utilization (Nitro) and to gain deeper insight into its HPC resources (the ELK stack, Graphite, and Grafana).
- Sharon Broude Geva, U-M Director of Advanced Research Computing, led a Birds of a Feather session titled “Strategies for Academic HPC Centers.” She was also a panelist in the Women in HPC workshop, in a session titled “Improving Diversity at Supercomputing.”
- Amy Liebowitz, a network planning analyst at U-M Information and Technology Services, was one of five women chosen to receive funding under the “Women in IT Networking at SC (WINS)” program. The program is a collaboration between the University Corporation for Atmospheric Research (UCAR), the Department of Energy’s Energy Sciences Network (ESnet), and the Keystone Initiative for Network Based Education and Research (KINBER), and pays a week of expenses for women to attend the conference and work on the SCinet networking system. This was Liebowitz’s first time at the conference; she said she planned to attend for a week, but the funding allowed her to stay for two weeks.
Demonstrations:
Title: SDN Optimized High-Performance Data Transfer Systems for Exascale Science
Booth: California Institute of Technology / CACR #1248, University of Michigan #2103, Stanford University #2009, OCC #749, Vanderbilt #271, Dell #1009, and Echostreams #582
Description: The next generation of major science programs faces unprecedented challenges in harnessing the wealth of knowledge hidden in Exabytes of globally distributed scientific data. Researchers from Caltech, FIU, Stanford, the University of Michigan, Vanderbilt, UCSD, and UNESP, along with other partner teams, have come together to meet these challenges by leveraging recent major advances in software-defined and Terabit/sec networks, workflow optimization methodologies, and state-of-the-art long-distance data transfer methods. This demonstration focuses on network path-building and flow optimizations using SDN and intelligent traffic engineering techniques, built on top of a 100G OpenFlow ring at the show site and connected to remote sites including the Pacific Research Platform (PRP). Remote sites will be interconnected using dedicated WAN paths provisioned using NSI dynamic circuits. The demonstrations include (1) the use of Open vSwitch (OVS) to extend the wide-area dynamic circuits storage to storage, with stable shaped flows at any level up to wire speed, (2) a pair of data transfer nodes (DTNs) designed for an aggregate 400 Gbps flow through 100GE switches from Dell, Inventec, and Mellanox, and (3) the use of Named Data Networking (NDN) to distribute and cache large high energy physics and climate science datasets.
Title: LHCONE Point2point Service with Data Transfer Nodes
Booth: California Institute of Technology # 1248, Univ of Michigan #2103, Vanderbilt University #271
Description: LHCONE (LHC Open Network Environment) is a globally distributed, specialized environment in which the large volumes of LHC data are transferred among different international LHC Tier (data center and analysis) sites. To date, these transfers have been conducted over the LHC Optical Private Network (LHCOPN, dedicated high-capacity circuits between LHC Tier 1 data centers) and via LHCONE, currently based on L2+VRF services. The LHCONE Point2Point Service aims to future-proof networking for the LHC, e.g., by providing support for OpenFlow. This demonstration will show how this goal can be accomplished, using dynamic path provisioning and at least one Data Transfer Node (DTN) in the US connected to at least one DTN in Europe, transferring LHC data between tiers. It will show a network services model that matches the requirements of LHC high energy physics research with emerging capabilities for programmable networking, integrating techniques such as the Network Service Interface (NSI), a protocol defined within the Open Grid Forum standards organization. Multiple LHC sites will transfer LHC data through DTNs. DTNs are edge nodes designed specifically for high-performance data transport; they have no other function. The DTNs will be connected by layer 2 circuits created through dynamic requests by NSI.
Title: Lyuba the Mammoth: Collaborative Exploration of Volumetric Data
Booth: University of Michigan #2103
Description: This demonstration depicts Lyuba, a mammoth calf unearthed in 2007 after 50,000 years in Siberia. It is thought Lyuba died of asphyxiation after falling into the mud hole that ultimately preserved her. Her specimen is considered the best-preserved mammoth mummy in the world, and is currently on display in the Shemanovsky Museum and Exhibition Center in Salekhard, Russia.
University of Michigan Professor Daniel Fisher and his colleagues at the U-M Museum of Paleontology arranged to have the mummy scanned using X-ray computed tomography in Ford Motor Company’s Nondestructive Evaluation Laboratory. A color map was then applied to the density data by Research Museum Collection Manager Adam Rountrey to reveal the internal anatomical structures. This data was then provided to the UM3D Lab, a service of the Digital Media Commons and U-M Library, as an image stack for interactive volumetric visualization. The stack comprises 1,132 JPEG image slices with 762×700 pixel resolution per slice. Each of the resulting voxels is 1 mm cubed.
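For readers curious about what an image stack like this looks like as data, here is a minimal sketch of assembling sequentially numbered JPEG slices into a 3D volume array. The directory and file names are hypothetical, and this is not the 3D Lab’s actual pipeline or the Jugular loader.

```python
# Minimal sketch: assemble a stack of JPEG slices into a 3D volume array.
# The directory/file names are hypothetical; this is not the Jugular loader.
import glob
import numpy as np
from PIL import Image

slice_paths = sorted(glob.glob("lyuba_slices/slice_*.jpg"))  # ~1,132 slices expected

# Stack the 762x700 slices along a new (depth) axis; each voxel spans 1 mm^3.
volume = np.stack([np.asarray(Image.open(p).convert("L")) for p in slice_paths])
print(volume.shape)  # (depth, height, width), e.g. roughly (1132, 700, 762)

# One axis-aligned cut through the volume, analogous to slicing the data
# interactively with a cutting plane in a volume viewer.
mid_slice = volume[volume.shape[0] // 2]
```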
When this data is brought into the 3D Lab’s Jugular software, the user can interactively slice through the volume by manipulating a series of hexagonal planes. For this demo, users at SC15 in Austin, Texas, can occupy the same virtual space as another user situated in the immersive virtual reality MIDEN in Ann Arbor, Michigan. Via a Kinect sensor in Austin, a 3D mesh of the user will be projected into the MIDEN alongside Lyuba, allowing for simultaneous interaction and exploration of the data.