University of Michigan researcher contributes to NASA findings on carbon in the atmosphere showcased in the journal Science


High-resolution satellite data from NASA’s Orbiting Carbon Observatory-2 are revealing the subtle ways that carbon links everything on Earth – the ocean, land, atmosphere, terrestrial ecosystems and human activities. Scientists using the first 2 1/2 years of OCO-2 data have published a special collection of five papers today in the journal Science that demonstrates the breadth of this research. In addition to showing how drought and heat in tropical forests affected global carbon dioxide levels during the 2015-16 El Niño, other results from these papers focus on ocean carbon release and absorption, urban emissions and a new way to study photosynthesis. A final paper by OCO-2 Deputy Project Scientist Annmarie Eldering of NASA’s Jet Propulsion Laboratory in Pasadena, California, and colleagues gives an overview of the state of OCO-2 science.

Manish Verma, a Geospatial/Data Science Consultant at the University of Michigan’s Consulting for Statistics, Computing and Analytics Research (CSCAR) unit, contributed as a coauthor to an article on a new way to measure photosynthesis over time and space.

Using data from the OCO-2, Verma’s analysis helped expand the utility of measurements of solar induced fluorescence (SIF), which indicates active photosynthesis in plants. Verma’s work showed that SIF data collected from the OCO-2 satellite provides reliable information on the variability of photosynthesis at a much smaller scale — down to individual ecosystems.

This can, in turn, “lead to more reliable estimates of carbon sources — that is, when, where, why and how carbon is exchanged between land and atmosphere — as well as a deeper understanding of carbon-climate feedbacks,” according to the Science article.
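To give a rough sense of what ecosystem-scale SIF analysis can look like in practice, the sketch below averages individual satellite soundings over a region and by month. It is a minimal illustration only, assuming a hypothetical CSV of OCO-2 SIF soundings; the file name, column names and bounding box are invented for this example and do not reflect the paper's actual pipeline.

```python
# Minimal sketch: averaging OCO-2 SIF soundings over an ecosystem-scale
# region and by month to track photosynthetic activity. The file name,
# column names, and bounding box are hypothetical placeholders; this is
# not the paper's actual analysis pipeline.
import pandas as pd

# Hypothetical CSV of individual SIF soundings
# (columns: lat, lon, sif_757nm, time)
soundings = pd.read_csv("oco2_sif_soundings.csv", parse_dates=["time"])

# Bounding box for a single ecosystem of interest (illustrative values)
lat_min, lat_max, lon_min, lon_max = 42.0, 46.0, -90.0, -82.0
in_region = soundings[
    soundings.lat.between(lat_min, lat_max)
    & soundings.lon.between(lon_min, lon_max)
]

# Monthly mean SIF as a simple proxy for the seasonal course of photosynthesis
monthly_sif = in_region.resample("MS", on="time")["sif_757nm"].mean()
print(monthly_sif)
```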

For more, see the NASA press release (https://www.nasa.gov/feature/jpl/new-insights-from-oco-2-showcased-in-science) and the Science article (http://science.sciencemag.org/content/358/6360/eaam5747.full).

Computational Science around U-M: Ph.D. Candidate Shannon Moran (Chemical Engineering) has won an ACM SIGHPC Intel Fellowship


Shannon Moran, a Ph.D. candidate in the Department of Chemical Engineering, has won a 2017 ACM SIGHPC Intel Fellowship. Shannon is a member of the Glotzer Group, which uses computer simulation to discover the fundamental principles of how nanoscale systems of building blocks self-assemble, and how to control the assembly process to engineer new materials.

ACM’s Special Interest Group on High Performance Computing (SIGHPC) is an international group within a major professional society that is devoted to the needs of students, faculty, researchers and practitioners in high performance computing. This year it awarded 12 fellowships with the aim of increasing the diversity of students pursuing graduate degrees in data science and computational science, including women as well as students from racial/ethnic backgrounds that have been historically underrepresented in the computing field. The fellowship provides $15,000 annually for study anywhere in the world.

The fellowship is funded by Intel and is presented at the annual Supercomputing conference, which this year takes place November 13-16 in Denver, Colorado.

U-M joins NSF-funded SLATE project to simplify scientific collaboration on a massive scale


From the Cosmic Frontier to CERN, New Platform Stitches Together Global Science Efforts

SLATE will enable creation of new platforms for collaborative science

Today’s most ambitious scientific quests — from the cosmic radiation measurements by the South Pole Telescope to the particle physics of CERN — are multi-institutional research collaborations requiring computing environments that connect instrumentation, data, and computational resources. Because of the scale of the data and the complexity of this science, these resources are often distributed among university research computing centers, national high performance computing centers, or commercial cloud providers. This can cause scientists to spend more time on the technical aspects of computation than on discoveries and knowledge creation, while computing support staff must invest ever more effort integrating domain-specific software with limited applicability beyond the community served.

With Services Layer At The Edge (SLATE), a $4 million project funded by the National Science Foundation, the University of Michigan joins a team led by the Enrico Fermi and Computation Institutes at the University of Chicago to provide technology that simplifies connecting university and laboratory data center capabilities to the national cyberinfrastructure ecosystem. The University of Utah is also participating. Once installed, SLATE connects local research groups with their far-flung collaborators, allowing central research teams to automate the exchange of data, software and computing tasks among institutions without burdening local system administrators with the installation and operation of highly customized scientific computing services. By stitching together these resources, SLATE will also expand the reach of domain-specific “science gateways” and multi-site research platforms.

“Science, ultimately, is a collective endeavor. Most scientists don’t work in a vacuum, they work in collaboration with their peers at other institutions,” said Shawn McKee, a co-PI on the project and director of the Center for Network and Storage-Enabled Collaborative Computational Science at the University of Michigan. “They often need to share not only data, but systems that allow execution of workflows across multiple institutions. Today, it is a very labor-intensive, manual process to stitch together data centers into platforms that provide the research computing environment required by forefront scientific discoveries.”

SLATE works by implementing “cyberinfrastructure as code”, augmenting high bandwidth science networks with a programmable “underlayment” edge platform. This platform hosts advanced services needed for higher-level capabilities such as data and software delivery, workflow services and science gateway components.  
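To make the “cyberinfrastructure as code” idea concrete, here is a minimal Python sketch of an edge service described declaratively rather than installed by hand. Every name in it (the EdgeService class, its fields, the example cache) is a hypothetical illustration of the general pattern; SLATE's actual interfaces are not shown here.

```python
# Purely illustrative sketch of "cyberinfrastructure as code": an edge
# service described as a declarative spec that a platform could validate
# and deploy. None of these names reflect SLATE's actual API.
from dataclasses import dataclass, field

@dataclass
class EdgeService:
    name: str                       # service identifier
    image: str                      # container image providing the service
    ports: list = field(default_factory=list)
    site: str = "unspecified"       # data center where it should run

    def to_manifest(self) -> dict:
        """Render the spec as a dict a deployment tool could consume."""
        return {"name": self.name, "image": self.image,
                "ports": self.ports, "site": self.site}

# A hypothetical software-delivery cache declared as code rather than
# hand-installed by local admins:
software_cache = EdgeService(name="software-cache",
                             image="example.org/cache-proxy:latest",
                             ports=[3128],
                             site="um-datacenter")
print(software_cache.to_manifest())
```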

U-M has numerous roles in the project, including:

  • defining, procuring and configuring much of the SLATE hardware platform;
  • working with Utah on the advanced networking aspects, including Software Defined Networking (SDN) and Network Function Virtualization (NFV);
  • developing the SLATE user interface and contributing to the core project design and implementation.

The project is similar to the OSiRIS project led by McKee, which also aims to remove bottlenecks to discovery posed by networking and data transfer infrastructure.

SLATE uses best-of-breed data center virtualization components, and where available, software defined networking, to enable automation of lifecycle management tasks by domain experts. As such, it simplifies the creation of scalable platforms that connect research teams, institutions and resources, accelerating science while reducing operational costs and development time. Since SLATE needs only commodity components, it can be used for distributed systems across all data center types and scales, thus enabling creation of ubiquitous, science-driven cyberinfrastructure.

At UChicago, the SLATE team will partner with the Research Computing Center and Information Technology Services to help the ATLAS experiment at CERN, the South Pole Telescope and the XENON dark matter search collaborations create the advanced cyberinfrastructure necessary for rapidly sharing data, computer cycles and software between partner institutions. The resulting systems will provide blueprints for national and international research platforms supporting a variety of science domains.

For example, the SLATE team will work with researchers from the Computation Institute’s Knowledge Lab to develop a hybrid platform that elastically scales computational social science applications between commercial cloud and campus HPC resources. The platform will allow researchers to use their local computational resources with the analytical tools and sensitive data shared through Knowledge Lab’s Cloud Kotta infrastructure, reducing cost and preserving data security.

“SLATE is about creating a ubiquitous cyberinfrastructure substrate for hosting, orchestrating and managing the entire lifecycle of higher level services that power scientific applications that span multiple institutions,” said Rob Gardner, a Research Professor in the Enrico Fermi Institute and Senior Fellow in the Computation Institute. “It clears a pathway for rapidly delivering capabilities to an institution, maximizing the science impact of local research IT investments.”

Many universities and research laboratories use a “Science DMZ” architecture to balance security with the ability to rapidly move large amounts of data in and out of the local network. As sciences from physics to biology to astronomy become more data-heavy, the complexity and need for these subnetworks grows rapidly, placing additional strain on local IT teams.

That stress is further compounded when local scientists join multi-institutional collaborations, often requiring the installation of specialized, domain-specific services for the sharing of compute and data resources.

With SLATE, research groups will be able to fully participate in multi-institutional collaborations and contribute resources to their collective platforms with minimal hands-on effort from their local IT team. When joining a project, the researchers and admins can select a package of software from a cloud-based service — a kind of “app store” — that allows them to connect and work with the other partners.

“Software and data can then be updated automatically by experts from the platform operations and research teams, with little to no assistance required from local IT personnel,” said Joe Breen, Senior IT Architect for Advanced Networking Initiatives at the University of Utah’s Center for High Performance Computing. “While the SLATE platform is designed to work in any data center environment, it will utilize advanced network capabilities, such as software defined overlay networks, when the devices support it.”

By reducing the technical expertise and time demands for participating in multi-institution collaborations, the SLATE platform will be especially helpful to smaller universities that lack the resources and staff of larger institutions and computing centers. The SLATE functionality can also support the development of “science gateways” which make it easier for individual researchers to connect to HPC resources such as the Open Science Grid and XSEDE.

“A central goal of SLATE is to lower the threshold for campuses and researchers to create research platforms within the national cyberinfrastructure,” Gardner said.

Initial partner sites for testing the SLATE platform and developing its architecture include New Mexico State University and Clemson University, where the focus will be creating distributed cyberinfrastructure in support of large-scale bioinformatics and genomics workflows. The project will also work with the Science Gateways Community Institute, an NSF-funded Scientific Software Innovation Institute, on SLATE integration to make gateways more powerful and reach more researchers and resources.

###

The Computation Institute (CI), a joint initiative of the University of Chicago and Argonne National Laboratory, is an intellectual nexus for scientists and scholars pursuing multi-disciplinary research and a resource center for developing and applying innovative computational approaches. Founded in 1999, it is home to over 100 faculty, fellows, and staff researching complex, system-level problems in such areas as biomedicine, energy and climate, astronomy and astrophysics, computational economics, social sciences and molecular engineering. CI is home to diverse projects including the Center for Robust Decision Making on Climate and Energy Policy, Knowledge Lab, The Urban Center for Computation and Data and the Center for Data Science and Public Policy.

For more information, contact Dan Meisler, Communications Manager, Advanced Research Computing at U-M: dmeisler@umich.edu, 734-764-7414

MICDE sponsored miRcore Biotechnology Summer Camp for the second year in a row


This year’s miRcore Biotechnology summer camp was a big success. Participants had hands-on experience in a wet lab and with the UNIX command line while accessing U-M’s High Performance Computing cluster, Flux, in a research setting. For the second year in a row, MICDE and ARC-TS sponsored the campers’ access to Flux as they learned the steps needed to run code on a computer cluster. The camp also included theoretical thermodynamics exercises, giving participants a well-rounded research experience in nucleotide biotechnology.

miRcore’s camps are designed to expose high school students to career opportunities in biomedicine and to provide research opportunities beyond the classroom setting. For more information please visit http://www.mircore.org/summer-camps/.

U-M, SJTU research teams share $1 million for data science projects


Five research teams from the University of Michigan and Shanghai Jiao Tong University in China are sharing $1 million to study data science and its impact on air quality, galaxy clusters, lightweight metals, financial trading and renewable energy.

Since 2009, the two universities have collaborated on a number of research projects that address challenges and opportunities in energy, biomedicine, nanotechnology and data science.

In the latest round of annual grants, the winning projects focus on data science and how it can be applied to chemistry and physics of the universe, as well as finance and economics.

For more, read the University Record article.

For descriptions of the research projects, see the MIDAS/SJTU partnership page.

New Data Science Computing Platform Available to U-M Researchers


Advanced Research Computing – Technology Services (ARC-TS) is pleased to announce an expanded data science computing platform, giving all U-M researchers new capabilities to host structured and unstructured databases, and to ingest, store, query and analyze large datasets.

The new platform features a flexible, robust and scalable database environment, and a set of data pipeline tools that can ingest and process large amounts of data from sensors, mobile devices and wearables, and other sources of streaming data. The platform leverages the advanced virtualization capabilities of ARC-TS’s Yottabyte Research Cloud (YBRC) infrastructure, and is supported by U-M’s Data Science Initiative launched in 2015. YBRC was created through a partnership between Yottabyte and ARC-TS announced last fall.

The following functionalities are immediately available:

  • Structured databases: MySQL/MariaDB and PostgreSQL.
  • Unstructured databases: Cassandra, MongoDB, InfluxDB, Grafana, and ElasticSearch.
  • Data ingestion: Redis, Kafka, RabbitMQ.
  • Data processing: Apache Flink, Apache Storm, Node.js and Apache NiFi.

Other types of databases can be created upon request.
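As a hedged sketch of how these pieces might fit together, the following Python example consumes streaming sensor readings from a Kafka topic and stores them in a PostgreSQL table. The host names, topic, table and credentials are placeholders invented for illustration, not actual YBRC endpoints.

```python
# Hedged sketch: consuming streaming sensor readings from a Kafka topic
# and storing them in a PostgreSQL table. Host names, topic, table, and
# credentials are hypothetical placeholders.
import json

import psycopg2
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "sensor-readings",                        # hypothetical topic
    bootstrap_servers="kafka.example.edu:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

conn = psycopg2.connect(host="pg.example.edu", dbname="lab",
                        user="researcher", password="...")
cur = conn.cursor()

for msg in consumer:
    reading = msg.value                       # e.g. {"device": "a1", "temp_c": 21.4}
    cur.execute(
        "INSERT INTO readings (device, temp_c) VALUES (%s, %s)",
        (reading["device"], reading["temp_c"]),
    )
    conn.commit()                             # commit per message for simplicity
```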

These tools are offered to all researchers at the University of Michigan free of charge, provided that certain usage restrictions are not exceeded. Large-scale users who outgrow the no-cost allotment may purchase additional YBRC resources. All interested parties should contact hpc-support@umich.edu.

At this time, the YBRC platform only accepts unrestricted data. The platform is expected to accommodate restricted data within the next few months.

ARC-TS also operates a separate data science computing cluster for researchers using the latest Hadoop components. This cluster will also be expanded in the near future.

MICDE announces 2017-2018 Fellowship recipients


MICDE is pleased to announce the recipients of the 2017-2018 MICDE Fellowships for students enrolled in the Ph.D. in Scientific Computing or the Graduate Certificate in Computational Discovery and Engineering. We had 91 applicants from 25 departments representing 6 schools and colleges. Due to the extraordinary number of high-quality applications, we increased the number of fellowships from 15 to 20. See our Fellowship page for more information.

AWARDEES

Diksha Dhawan, Chemistry
Negar Farzaneh, Computational Medicine & Bioinformatics
Kritika Iyer, Biomedical Engineering
Tibin John, Neuroscience
Bikash Kanungo, Mechanical Engineering
Yu-Han Kao, Epidemiology
Steven Kiyabu, Mechanical Engineering
Christiana Mavroyiakoumou, Mathematics
Ehsan Mirzakhalili, Mechanical Engineering
Colten Peterson, Climate and Space Sciences & Engineering
James Proctor, Materials Science & Engineering
Evan Rogers, Biomedical Engineering
Longxiu Tian, S. Ross School of Business
Jipu Wang, Nuclear Engineering and Radiological Sciences
Yanming Wang, Chemistry
Zhenlin Wang, Mechanical Engineering
Alicia Welden, Chemistry
Anna White, Industrial & Operations Engineering
Chia-Nan Yeh, Physics
Yiling Zhang, Industrial & Operations Engineering

HONORABLE MENTIONS

Geunyeong Byeon, Industrial & Operations Engineering
Ayoub Gouasmi, Aerospace Engineering
Joseph Kleinhenz, Physics
Jia Li, Physics
Changjiang Liu, Biophysics
Vo Nguyen, Computational Medicine & Bioinformatics
Everardo Olide, Applied Physics
Qiyun Pan, Industrial & Operations Engineering
Pengchuan Wang, Civil & Environmental Engineering
Xinzhu Wei, Ecology & Evolutionary Biology

ARC-TS seeks input on next generation HPC cluster


The University of Michigan is beginning the process of building its next-generation HPC platform, “Big House.” Flux, the shared HPC cluster, has served us well for more than five years but has reached the end of its useful life. As we move forward with its replacement, we want to make sure we’re meeting the needs of the research community.

ARC-TS will be holding a series of town halls to gather input from faculty and researchers on the next HPC platform to be built by the University. These town halls are open to anyone and will be held at:

  • College of Engineering, Johnson Room, Tuesday, June 20th, 9:00a – 10:00a
  • NCRC Bldg 300, Room 376, Wednesday, June 21st, 11:00a – 12:00p
  • LSA #2001, Tuesday, June 27th, 10:00a – 11:00a
  • 3114 Med Sci I, Wednesday, June 28th, 2:00p – 3:00p

Your input will help ensure that U-M stays on course in providing HPC resources, so we hope you will make time to attend one of these sessions. If you cannot attend, please email hpc-support@umich.edu with any input you want to share.

Job Opening: Research Cloud Administrator


Advanced Research Computing – Technology Services (ARC-TS) has an exciting opportunity for a Research Cloud Administrator.

This position will be part of a team working on a novel platform for research computing at the University, spanning data science and high performance computing. The primary responsibility of this position will be to develop a resource-sharing environment that enables University of Michigan researchers to run data science and HPC workflows in containers.

For more details and to apply, visit: http://careers.umich.edu/job_detail/142372/research_cloud_administrator_intermediate

HPC training workshops begin Monday, May 15


A series of training workshops in high performance computing will be held May 15, May 17 and May 24, 2017, presented by CSCAR in conjunction with Advanced Research Computing – Technology Services (ARC-TS). All sessions are held at East Hall, Room B254, 530 Church St.

Introduction to the Linux Command Line
This course will familiarize students with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also known as the “command line.”
• Monday, May 15, 9 a.m. – noon. (full description | registration)

Introduction to the Flux cluster and batch computing
This workshop will provide a brief overview of the components of the Flux cluster, including the resource manager and scheduler, and will offer students hands-on experience.
• Wednesday, May 17, 1 – 4:30 p.m. (full description | registration)

Advanced batch computing on the Flux cluster
This course will cover advanced areas of cluster computing on the Flux cluster, including common parallel programming models, dependent and array scheduling, and a brief introduction to scientific computing with Python, among other topics.
• Wednesday, May 24, 1 – 5 p.m. (full description | registration)
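For a taste of one common parallel programming model such a course covers, here is a minimal sketch using MPI through mpi4py. It is illustrative only, not the workshop’s actual material, and the script name in the run command is hypothetical.

```python
# Minimal taste of a common parallel programming model: MPI via mpi4py.
# Run with, e.g.: mpirun -n 4 python sum_ranks.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's ID within the job
size = comm.Get_size()          # total number of processes

# Each rank computes a partial result, then all results are summed on rank 0
partial = rank ** 2
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of rank^2 over {size} ranks: {total}")
```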

NOTE: Additional workshops may be scheduled if demand warrants. Please sign up for the waiting list if the workshops are full, and you will be given first priority for any additional sessions.