Flux user meetup — Fri., 10/2


Users of high performance computing resources are invited to meet Flux operators and support staff in person at an upcoming user meeting:

  • Friday, Oct. 2, 1 – 5 p.m., Medical Sciences Building I, Room 7323 (1301 Catherine St.)

There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Hadoop, XSEDE, Amazon, or other.

Ask any questions you may have. The Flux staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

• What ARC-TS services are there, and how do I access them?
• How can I make the most of PBS and learn its features specific to my work?
• I want to do X; do you have software capable of it?
• What is special about GPUs/Xeon Phi/accelerators?
• Are there resources for people without budgets?
• I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
• I want to learn more about compilers and debugging.
• I want to learn more about performance tuning; can you look at my code with me?
• Etc.

REMINDER: Flux Summer 2015 Outage — July 31 through Aug. 7


Flux, Flux Hadoop, and their storage systems (/home, /home2, /scratch, and HDFS) will be unavailable starting at 7 a.m. Friday, July 31st, returning to service on Friday, August 7th.

During this time, the following updates are planned:

  • OS and supporting software updates for the cluster. This will be a minor update to the currently installed Red Hat version (RHEL 6.5).
  • Cluster management software will be updated and reconfigured onto new hardware.
  • Networking for the HPC and Hadoop clusters and the supporting compute servers will be reconfigured. Due to these changes, some hostnames will change. We will update everyone when the new names become definitive.
  • Updates to the Lustre filesystem /scratch (both configuration and software).
  • Updates to some default software versions, and retirement of some older software packages and/or versions.
  • Software updates to the Hadoop cluster.

Status updates will be posted on the ARC-TS Twitter feed (https://twitter.com/arcts_um).

Open meeting for HPC users at U-M — May 22


Users of high performance computing resources are invited to meet Flux operators and support staff in person at an upcoming user meeting:

  • Friday, May 22, 1 – 5 p.m., NCRC Building 520, Room 1122 (Directions)

There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Nyx, XSEDE, or other.

Ask any questions you may have. The Flux staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What Flux/ARC services are there, and how do I access them?
  • How can I make the most of PBS and learn its features specific to my work?
  • I want to do X; do you have software capable of it?
  • What is special about GPUs/Xeon Phi/accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations. What support can ARC provide?
  • I want to learn more about compilers and debugging.
  • I want to learn more about performance tuning; can you look at my code with me?
  • Etc.

For more information, contact Brock Palen (brockp@umich.edu) at the College of Engineering; Dr. Charles Antonelli (cja@umich.edu) at LSA; Jeremy Hallum (jhallum@umich.edu) at the Medical School; or Vlad Wielbut (wlodek@umich.edu) at SPH.

Intel Xeon Phi cards now available on Flux

Eight Intel Xeon Phi 5110p cards are now available from ARC-TS as a technology preview. These are known as Many Integrated Core (MIC) architectures, and consist of accelerator cards that fit into Flux compute nodes. Code can then offload portions or all of the work of a compute job to the card, which can often result in improved performance.
 
As a technology preview, there is no additional cost for using the Phis. Anyone with an active Flux allocation can test the Phis, as long as the jobs are less than 24 hours long.
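A minimal job script for trying out a Phi under the preview might look like the sketch below. The allocation name, queue, and especially the `mics` resource label are illustrative assumptions, not confirmed syntax; check with hpc-support for the actual resource request on Flux.

```shell
#PBS -N phi_test
#PBS -A example_flux            # hypothetical allocation name
#PBS -l nodes=1:ppn=1:mics=1    # "mics" resource label is an assumption; confirm with hpc-support
#PBS -l walltime=12:00:00       # must stay under the 24-hour preview limit
#PBS -q flux

cd "$PBS_O_WORKDIR"
./my_offload_app                # hypothetical binary built with offload support
```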
 
Read posts on the Flux HPC blog to get started and to see an example of the speedup a Phi can provide in Linear Algebra.

Flux, Nyx outage scheduled for March 28


The ARC cluster Flux and the Engineering cluster Nyx will be unavailable for jobs starting March 28th at 10:00 p.m., due to an emergency update to the ITS Value Storage systems on that date:
http://status.its.umich.edu/outage.php?id=93178

Flux and Nyx rely on Value Storage and thus will also be unavailable during that time. We expect the outage to be finished quickly, and any queued jobs will run as expected once the service is completed.

At the start of the outage, login and transfer nodes will be rebooted. Users will be unable to log in until after the service is restored.

Any jobs that request more walltime than remains until the start of the outage will be held and started after the systems return to service.

The maximum walltime you can request and still have your job start before the outage can be found with our walltime calculator:

module load flux-utils
maxwalltime
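The calculator's logic boils down to the time remaining until the outage begins. A minimal sketch of that calculation (hypothetical Python, assuming the 2015 outage date; not the actual flux-utils code):

```python
from datetime import datetime

def max_walltime(now, outage_start):
    """Return the largest walltime (HH:MM:SS) that still finishes before
    the outage begins, or None if the outage has already started."""
    remaining = int((outage_start - now).total_seconds())
    if remaining <= 0:
        return None
    hours, rest = divmod(remaining, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

# Outage starts March 28 at 10:00 p.m. (assuming 2015).
outage = datetime(2015, 3, 28, 22, 0, 0)
print(max_walltime(datetime(2015, 3, 27, 22, 0, 0), outage))  # 24:00:00
```

Jobs requesting more than this value are held until the systems return to service, as described above.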

Allocations that are active on that date will be extended by one day at no cost.

If you have any questions, feel free to ask us at hpc-support@umich.edu.
For immediate updates, watch: https://twitter.com/umcoecac

Data will be deleted from /scratch on Flux if unused for 90 days


Over the past several months, a huge amount of data (491 TB) has accumulated in the /scratch directory on the Flux computing cluster. /scratch is meant for data relating to currently running jobs, and the buildup of data is threatening the performance of Flux for all users.

Therefore, ARC will begin deleting data from /scratch that have not been accessed for 90 consecutive days.
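To see which of your own files would meet the 90-day criterion, a find-based check like the following can help. This is an illustrative sketch only; the directory name is a hypothetical example, and ARC's actual purge tooling is not published.

```shell
# List files under $DIR whose last access time is more than 90 days ago,
# the same criterion ARC describes for /scratch purging.
DIR=${DIR:-/scratch/example_alloc}   # example_alloc is a hypothetical path
find "$DIR" -type f -atime +90 -print 2>/dev/null || true
```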

Flux account owners with unused data have begun receiving emails warning that their data will be deleted.

Account owners in this situation can move their data to another system, such as ITS Value Storage or their own equipment, using the dedicated transfer nodes on Flux, which have high-speed network connections for that purpose.

For more information on Value Storage, see the ITS website.

For more information on transfer nodes, see the ARC website.

If you have any questions, please contact hpc-support@umich.edu.