Cluster Status Notices

GPFS outage

Apr. 19, 2019—Around 2PM today a GPFS manager node experienced a problem that caused the GPFS filesystems to hang across the cluster, making logins and file access unresponsive. The issue was corrected at 3PM today and all compute nodes appear to have recovered. Please check the output of running jobs just to be safe.

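For anyone double-checking their jobs after this hang, a minimal sketch along the following lines may help. It assumes Slurm's sacct accounting tool is available on the gateways (the cluster runs SLURM, per the scheduler notices below); the jobs_in_window helper is hypothetical and simply lists the current user's jobs that overlapped the 2PM-3PM window so their output can be inspected.

import getpass
import subprocess

def jobs_in_window(start="2019-04-19T14:00:00", end="2019-04-19T15:00:00"):
    """List the current user's Slurm jobs active in the given window via sacct."""
    cmd = [
        "sacct",
        "--user", getpass.getuser(),
        "--starttime", start,
        "--endtime", end,
        "--allocations",   # one line per job allocation, not per step
        "--parsable2",     # pipe-delimited output, easy to scan or parse
        "--format", "JobID,JobName,State,ExitCode,Elapsed",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(jobs_in_window())

Jobs reported as FAILED or NODE_FAIL in that window are the first candidates for a re-run; everything else is still worth a quick look at its output files.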


Scheduled Maintenance on Public Gateways, Saturday, April 20th, 7am-9am

Apr. 18, 2019—This Saturday morning, April 20th, we will be taking the public gateway and portal servers offline from 7 am to 9 am in order to reboot, upgrade the operating system from CentOS 7.4 to 7.6, and upgrade the GPFS filesystem from 4.2.3 to 5.0.2. Updating the entire cluster to GPFS 5 is an incremental but...



ACCRE networking problems fixed; please note these rules of thumb when reading data from or writing data to the cluster

Apr. 9, 2019—Update, 4/10/2019: Early this morning we applied some changes that appear to have resolved the network stability issues we were having yesterday. Feel free to resume normal activities on the cluster. We apologize for the interruption! On a related note, we have been observing intermittent sluggishness on /scratch and /data over the last several weeks....



[Resolved] Visualization portal maintenance Saturday morning is now complete

Mar. 14, 2019—Update, March 16: This maintenance is now complete. The ACCRE Visualization Portal will go down for scheduled maintenance on Saturday, March 16th, from 6 AM to 10 AM. This will only affect web access through the Visualization Portal, so users may still run jobs on the cluster and log in through the gateway nodes via SSH...



[Resolved] SLURM scheduler is back online following outage

Mar. 5, 2019—Update, 3/5/2019: The scheduler is now operational. The impact on the cluster queue has been minimal. We are investigating the exact cause of the stuck jobs in order to prevent this from happening again. Thank you for your patience. We are currently experiencing a SLURM overload caused by issues in killing processes related...



[Resolved] /scratch and /data are back online following weekend maintenance

Jan. 24, 2019—Update, 2/12/2019: /scratch and /data are back online and we are now accepting new jobs. We were never able to get the maintenance command to run successfully, but we were able to verify (with IBM’s assistance) the integrity of /scratch and /data, which is great news and means we will not need to take another...



Final Steps for CentOS 7 Upgrade

Jan. 10, 2019—Update, Jan 25: The CentOS 6 login is now closed. Original post below… It has been a long journey, but we are almost to the end! Please see below for a schedule of the final systems to be upgraded to CentOS 7. Note this schedule does not include a handful of custom/private gateways that still...



[Resolved] Full cluster downtime on Wednesday, Dec 19 starting at 6am; make sure to log out and halt any running processes before downtime starts

Dec. 10, 2018—Update, 12/20/2018: The GPU driver upgrade on all Maxwell and Pascal nodes in the CentOS 7 cluster is now complete and the nodes are available to host jobs. Thank you for your patience. Update, 12/19/2018: The cluster is now back online and accessible for normal use again, with the exception of the GPU nodes. We...

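As a small aid for the "log out and halt any running processes" request above, here is a minimal sketch of a pre-downtime check. It only assumes the standard ps utility; the my_processes helper is hypothetical and merely reports what is still running under your account on the current gateway, leaving it to you to stop anything it finds and then log out.

import getpass
import subprocess

def my_processes():
    """List this user's processes (PID, start time, elapsed time, command) via ps."""
    cmd = ["ps", "-u", getpass.getuser(), "-o", "pid,lstart,etime,cmd"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(my_processes())
    # Anything listed here besides your login shell and this script should be
    # stopped (for example with kill <PID>) before the downtime begins.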


[Resolved] Problems with GPFS; logins and jobs may be affected

Nov. 7, 2018—Update, 3pm: /home is back online. Please check your jobs’ output very carefully as it is likely that many will need to be re-run, especially if they were performing I/O to or from /home. Jobs performing I/O to /scratch, /data, or /dors may have survived. Please open a helpdesk ticket with us if you have...



[Resolved] Cluster unresponsive or sluggish for some users

Oct. 18, 2018—Update, 2:30pm: All clear. If you notice anything unusual, submit a helpdesk ticket as always. We're receiving reports from users this morning about the cluster being unresponsive or sluggish. We are investigating the issue and will have an update soon. Thanks!
