
U-M partners with Cavium on Big Data computing platform


A new partnership between the University of Michigan and Cavium Inc., a San Jose-based provider of semiconductor products, will create a powerful new Big Data computing cluster available to all U-M researchers.

The $3.5 million ThunderX computing cluster will enable U-M researchers to, for example, process massive amounts of data generated by remote sensors in distributed manufacturing environments, or by test fleets of automated and connected vehicles.

The cluster will run the Hortonworks Data Platform, which provides Spark, Hadoop MapReduce and other tools for large-scale data processing.
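As a hedged illustration of the kind of workload such a Spark/Hadoop stack supports (none of this comes from the announcement; the HDFS paths, file format and column names are hypothetical), a short PySpark job might aggregate sensor readings across the cluster:

    # Minimal PySpark sketch: aggregate hypothetical sensor readings on a
    # Spark/Hadoop cluster. All paths and column names are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("sensor-aggregation")   # job name shown in the cluster UI
             .getOrCreate())

    # Read CSV records from HDFS (hypothetical location and schema).
    readings = spark.read.csv(
        "hdfs:///data/sensors/*.csv",
        header=True,
        inferSchema=True,
    )

    # Average each sensor's reading per hour across the whole dataset.
    hourly = (readings
              .withColumn("hour", F.date_format(F.col("timestamp"), "yyyy-MM-dd HH"))
              .groupBy("sensor_id", "hour")
              .agg(F.avg("value").alias("avg_value")))

    hourly.write.mode("overwrite").parquet("hdfs:///results/sensor_hourly_avg")
    spark.stop()

The same computation could be written as a Hadoop MapReduce job; Spark's DataFrame API is used here only because it keeps the sketch short.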

“U-M scientists are conducting groundbreaking research in Big Data already, in areas like connected and automated transportation, learning analytics, precision medicine and social science. This partnership with Cavium will accelerate the pace of data-driven research and open up new avenues of inquiry,” said Eric Michielssen, U-M associate vice president for advanced research computing and the Louise Ganiard Johnson Professor of Engineering in the Department of Electrical Engineering and Computer Science.

“I know from experience that U-M researchers are capable of amazing discoveries. Cavium is honored to help break new ground in Big Data research at one of the top universities in the world,” said Cavium founder and CEO Syed Ali, who received a master of science in electrical engineering from U-M in 1981.

Cavium Inc. is a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking. The new U-M system will use dual socket servers powered by Cavium’s ThunderX ARMv8-A workload optimized processors.

The ThunderX product family is Cavium’s line of 64-bit ARMv8-A server processors for next-generation data center and cloud applications. It features high-performance custom cores, single- and dual-socket configurations, high memory bandwidth and large memory capacity.

Alec Gallimore, the Robert J. Vlasic Dean of Engineering at U-M, said the Cavium partnership represents a milestone in the development of the College of Engineering and the university.

“It is clear that the ability to rapidly gain insights into vast amounts of data is key to the next wave of engineering and science breakthroughs. Without a doubt, the Cavium platform will allow our faculty and researchers to harness the power of Big Data, both in the classroom and in their research,” said Gallimore, who is also the Richard F. and Eleanor A. Towner Professor, an Arthur F. Thurnau Professor, and a professor both of aerospace engineering and of applied physics.

Along with applications in fields like manufacturing and transportation, the platform will enable researchers in the social, health and information sciences to more easily mine large, structured and unstructured datasets. This will eventually allow, for example, researchers to discover correlations between health outcomes and disease outbreaks with information derived from socioeconomic, geospatial and environmental data streams.
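As a purely hypothetical sketch of that kind of cross-dataset analysis (the dataset locations, Parquet format and column names below are assumptions, not details from the project), a Spark job could join a health-outcomes table with an environmental table and measure how strongly the two vary together:

    # Illustrative only: join hypothetical health-outcome and environmental
    # datasets by county and month, then compute a simple correlation.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("outcomes-vs-environment").getOrCreate()

    # Hypothetical Parquet datasets; real schemas would come from the researchers.
    outcomes = spark.read.parquet("hdfs:///data/health_outcomes")   # county, month, case_rate
    environment = spark.read.parquet("hdfs:///data/air_quality")    # county, month, pm25

    joined = outcomes.join(environment, on=["county", "month"])

    # Pearson correlation between case rates and particulate levels.
    corr = joined.stat.corr("case_rate", "pm25")
    print("correlation(case_rate, pm25) = %.3f" % corr)

    spark.stop()

A real analysis would control for confounders; the point of the sketch is only that the joined data sits in one place and scales across the cluster.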

U-M and Cavium chose to run the cluster on the Hortonworks Data Platform, which is based on open-source Apache Hadoop. The ThunderX cluster will deliver high-performance computing services for Hadoop analytics and, ultimately, a total of three petabytes of storage space.

“Hortonworks is excited to be a part of forward-leading research at the University of Michigan exploring low-powered, high-performance computing,” said Nadeem Asghar, vice president and global head of technical alliances at Hortonworks. “We see this as a great opportunity to further expand the platform and segment enablement for Hortonworks and the ARM community.”

U-M joins NSF-funded SLATE project to simplify scientific collaboration on a massive scale


From the Cosmic Frontier to CERN, New Platform Stitches Together Global Science Efforts

SLATE will enable creation of new platforms for collaborative science

Today’s most ambitious scientific quests — from the cosmic radiation measurements by the South Pole Telescope to the particle physics of CERN — are multi-institutional research collaborations requiring computing environments that connect instrumentation, data, and computational resources. Because of the scale of the data and the complexity of this science, these resources are often distributed among university research computing centers, national high performance computing centers, or commercial cloud providers. This can cause scientists to spend more time on the technical aspects of computation than on discoveries and knowledge creation, while computing support staff must invest substantial effort integrating domain-specific software that has limited applicability beyond the community it serves.

With Services Layer At The Edge (SLATE), a $4 million project funded by the National Science Foundation, the University of Michigan joins a team led by the Enrico Fermi and Computation Institutes at the University of Chicago to provide technology that simplifies connecting university and laboratory data center capabilities to the national cyberinfrastructure ecosystem. The University of Utah is also participating. Once installed, SLATE connects local research groups with their far-flung collaborators, allowing central research teams to automate the exchange of data, software and computing tasks among institutions without burdening local system administrators with installation and operation of highly customized scientific computing services. By stitching together these resources, SLATE will also expand the reach of domain-specific “science gateways” and multi-site research platforms.

“Science, ultimately, is a collective endeavor. Most scientists don’t work in a vacuum, they work in collaboration with their peers at other institutions,” said Shawn McKee, a co-PI on the project and director of the Center for Network and Storage-Enabled Collaborative Computational Science at the University of Michigan. “They often need to share not only data, but systems that allow execution of workflows across multiple institutions. Today, it is a very labor-intensive, manual process to stitch together data centers into platforms that provide the research computing environment required by forefront scientific discoveries.”

SLATE works by implementing “cyberinfrastructure as code”, augmenting high bandwidth science networks with a programmable “underlayment” edge platform. This platform hosts advanced services needed for higher-level capabilities such as data and software delivery, workflow services and science gateway components.  
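To make the “cyberinfrastructure as code” idea concrete, here is a conceptual Python sketch; it is not the SLATE interface, and the EdgeService class, the apply() stub and the service values are invented purely for illustration. The point is only that a service is declared as data and a program reconciles that declaration against an edge node.

    # Conceptual sketch of "cyberinfrastructure as code" -- not the SLATE API.
    # A service is described declaratively; an operator program reads the
    # description and, in a real platform, would drive container and SDN tooling.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class EdgeService:
        """Declarative description of a service hosted on an edge platform."""
        name: str
        image: str                                  # container image providing the service
        ports: List[int] = field(default_factory=list)
        config: Dict[str, str] = field(default_factory=dict)


    def apply(service: EdgeService) -> None:
        """Pretend to reconcile the declared service onto an edge node.

        A real platform would call container orchestration and networking APIs
        here; this stub only prints the intended state.
        """
        print("ensuring service '%s' from image %s on ports %s with config %s"
              % (service.name, service.image, service.ports, service.config))


    if __name__ == "__main__":
        # Hypothetical data-delivery cache declared as code.
        cache = EdgeService(
            name="data-cache",
            image="example.org/science-cache:latest",
            ports=[1094],
            config={"origin": "https://data.example.org"},
        )
        apply(cache)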

U-M has numerous roles in the project, including:

  • defining, procuring and configuring much of the SLATE hardware platform;
  • working on the advanced networking aspects (along with Utah), including Software Defined Networking (SDN) and Network Function Virtualization (NFV); and
  • developing the SLATE user interface and contributing to the core project design and implementation.

The project is similar to the OSiRIS project led by McKee, which also aims to remove bottlenecks to discovery posed by networking and data transfer infrastructure.

SLATE uses best-of-breed data center virtualization components, and where available, software defined networking, to enable automation of lifecycle management tasks by domain experts. As such, it simplifies the creation of scalable platforms that connect research teams, institutions and resources, accelerating science while reducing operational costs and development time. Since SLATE needs only commodity components, it can be used for distributed systems across all data center types and scales, thus enabling creation of ubiquitous, science-driven cyberinfrastructure.

At UChicago, the SLATE team will partner with the Research Computing Center and Information Technology Services to help the ATLAS experiment at CERN, the South Pole Telescope and the XENON dark matter search collaborations create the advanced cyberinfrastructure necessary for rapidly sharing data, computer cycles and software between partner institutions. The resulting systems will provide blueprints for national and international research platforms supporting a variety of science domains.

For example, the SLATE team will work with researchers from the Computation Institute’s Knowledge Lab to develop a hybrid platform that elastically scales computational social science applications between commercial cloud and campus HPC resources. The platform will allow researchers to use their local computational resources with the analytical tools and sensitive data shared through Knowledge Lab’s Cloud Kotta infrastructure, reducing cost and preserving data security.

“SLATE is about creating a ubiquitous cyberinfrastructure substrate for hosting, orchestrating and managing the entire lifecycle of higher level services that power scientific applications that span multiple institutions,” said Rob Gardner, a Research Professor in the Enrico Fermi Institute and Senior Fellow in the Computation Institute. “It clears a pathway for rapidly delivering capabilities to an institution, maximizing the science impact of local research IT investments.”

Many universities and research laboratories use a “Science DMZ” architecture to balance security with the ability to rapidly move large amounts of data in and out of the local network. As sciences from physics to biology to astronomy become more data-heavy, the complexity and need for these subnetworks grows rapidly, placing additional strain on local IT teams.

That stress is further compounded when local scientists join multi-institutional collaborations, often requiring the installation of specialized, domain-specific services for the sharing of compute and data resources.

With SLATE, research groups will be able to fully participate in multi-institutional collaborations and contribute resources to their collective platforms with minimal hands-on effort from their local IT team. When joining a project, the researchers and admins can select a package of software from a cloud-based service — a kind of “app store” — that allows them to connect and work with the other partners.

“Software and data can then be updated automatically by experts from the platform operations and research teams, with little to no assistance required from local IT personnel,” said Joe Breen, Senior IT Architect for Advanced Networking Initiatives at the University of Utah’s Center for High Performance Computing. “While the SLATE platform is designed to work in any data center environment, it will utilize advanced network capabilities, such as software defined overlay networks, when the devices support it.”

By reducing the technical expertise and time demands for participating in multi-institution collaborations, the SLATE platform will be especially helpful to smaller universities that lack the resources and staff of larger institutions and computing centers. The SLATE functionality can also support the development of “science gateways” which make it easier for individual researchers to connect to HPC resources such as the Open Science Grid and XSEDE.

“A central goal of SLATE is to lower the threshold for campuses and researchers to create research platforms within the national cyberinfrastructure,” Gardner said.

Initial partner sites for testing the SLATE platform and developing its architecture include New Mexico State University and Clemson University, where the focus will be on creating distributed cyberinfrastructure in support of large-scale bioinformatics and genomics workflows. The project will also work with the Science Gateways Community Institute, an NSF-funded Scientific Software Innovation Institute, on SLATE integration to make gateways more powerful and reach more researchers and resources.

###

The Computation Institute (CI), a joint initiative of the University of Chicago and Argonne National Laboratory, is an intellectual nexus for scientists and scholars pursuing multi-disciplinary research and a resource center for developing and applying innovative computational approaches. Founded in 1999, it is home to over 100 faculty, fellows, and staff researching complex, system-level problems in such areas as biomedicine, energy and climate, astronomy and astrophysics, computational economics, social sciences and molecular engineering. CI is home to diverse projects including the Center for Robust Decision Making on Climate and Energy Policy, Knowledge Lab, The Urban Center for Computation and Data and the Center for Data Science and Public Policy.

For more information, contact Dan Meisler, Communications Manager, Advanced Research Computing at U-M: dmeisler@umich.edu, 734-764-7414

New on-campus data-science and computational research services available


Researchers across campus now have access to several new services to help them navigate the new tools and methodologies emerging for data-intensive and computational research.

As part of the U-M Data Science Initiative announced in fall 2015, Consulting for Statistics, Computing and Analytics Research (CSCAR) is offering new and expanded services, including guidance on:

  • Research methodology for data science.
  • Large scale data processing using high performance computing systems.
  • Optimization of code and use of Flux and other advanced computing systems.
  • Advanced data management.
  • Geospatial data analyses.
  • Exploratory analysis and data visualization.
  • Obtaining licensed data from commercial sources.
  • Scraping, aggregating and integrating data from public sources.
  • Analysis of restricted data.

“With Big Data and computational simulations playing an ever-larger role in research in a variety of fields, it’s increasingly important to provide researchers with a comprehensive ecosystem of support and services that address those methodologies,” said CSCAR Director Kerby Shedden.

As part of this significant expansion of its scope, the campuswide statistical consulting service formerly known as the Center for Statistical Consultation and Research has been renamed Consulting for Statistics, Computing and Analytics Research (CSCAR).

For more information, see the University Record article.